
OVERVIEW

Language education is the teaching and learning of a language. It can include improving a learner's
mastery of her or his native language, but the term is more commonly used with regard to second
language acquisition, which means the learning of a foreign or second language and which is the
topic of this article. Language education is a branch of applied linguistics.
History of foreign language education
Ancient to medieval period
Although the need to learn foreign languages is almost as old as human history itself, the origins of
modern language education are in the study and teaching of Latin in the 17th century. Latin had for
many centuries been the dominant language of education, commerce, religion, and government in
much of the Western world, but it was displaced by French, Italian, and English by the end of the
16th century. John Amos Comenius was one of many people who tried to reverse this trend. He
composed a complete course for learning Latin, covering the entire school curriculum, culminating in
his Opera Didactica Omnia, 1657.
In this work, Comenius also outlined his theory of language acquisition. He is one of the first theorists
to write systematically about how languages are learned and about pedagogical methodology for
language acquisition. He held that language acquisition must be allied with sensation and experience.
Teaching must be oral. The schoolroom should have models of things, and failing that, pictures of
them. As a result, he also published the world's first illustrated children's book, Orbis Sensualium
Pictus. The study of Latin diminished from the study of a living language to be used in the real world
to a subject in the school curriculum. Such decline brought about a new justification for its study. It
was then claimed that its study developed intellectual abilities, and the study of Latin grammar
became an end in and of itself.
"Grammar schools" from the 16th to 18th centuries focused on teaching the grammatical aspects of
Classical Latin. Advanced students continued grammar study with the addition of rhetoric.
18th century
The study of modern languages did not become part of the curriculum of European schools until the
18th century. Their study was modeled on the purely academic study of Latin: students of modern
languages did many of the same exercises, studying grammatical rules and translating abstract
sentences. Oral work was
minimal, and students were instead required to memorize grammatical rules and apply these to
decode written texts in the target language. This tradition-inspired method became known as the
'grammar-translation method'.
19th–20th century
Henry Sweet was a key figure in establishing the applied linguistics tradition in language teaching.
Innovation in foreign language teaching began in the 19th century and became very rapid in the 20th
century. It led to a number of different and sometimes conflicting methods, each trying to be a major
improvement over the previous or contemporary methods. The earliest applied linguists included
Jean Manesca, Heinrich Gottfried Ollendorff (1803–1865), Henry Sweet (1845–1912), Otto Jespersen
(1860–1943), and Harold Palmer (1877–1949). They worked on setting language teaching principles
and approaches based on linguistic and psychological theories, but they left many of the specific
practical details for others to devise.
Those looking at the history of foreign-language education in the 20th century and the methods of
teaching (such as those related below) might be tempted to think that it is a history of failure. Very

few students in U.S. universities who have a foreign language as a major manage to reach something
called "minimum professional proficiency". Even the "reading knowledge" required for a PhD degree
is comparable only to what second-year language students read, and only very few researchers who
are native English speakers can read and assess information written in languages other than English.
Even a number of famous linguists are monolingual.
However, anecdotal evidence for successful second or foreign language learning is easy to find,
leading to a discrepancy between these cases and the failure of most language programs, which
helps make the research of second language acquisition emotionally charged. Older methods and
approaches such as the grammar translation method or the direct method are dismissed and even
ridiculed as newer methods and approaches are invented and promoted as the only and complete
solution to the problem of the high failure rates of foreign language students.
Most books on language teaching list the various methods that have been used in the past, often
ending with the author's new method. These new methods are usually presented as coming only
from the author's mind, as the authors generally give no credence to what was done before and do
not explain how it relates to the new method. For example, descriptive linguists seem to claim
unhesitatingly that there were no scientifically-based language teaching methods before their work
(which led to the audio-lingual method developed for the U.S. Army in World War II). However, there
is significant evidence to the contrary. It is also often inferred or even stated that older methods
were completely ineffective or have died out entirely, when even the oldest methods are still in use
(e.g. the Berlitz version of the direct method). One reason for this situation is that proponents of new
methods have been so sure that their ideas are so new and so correct that they could not conceive
that the older ones have enough validity to cause controversy. This was in turn caused by emphasis
on new scientific advances, which has tended to blind researchers to precedents in older work.
There have been two major branches in the field of language learning: the empirical and the
theoretical, and these have almost completely separate histories, with each gaining ground over the
other at one point in time or another. Examples of researchers on the empiricist side are Jespersen,
Palmer, and Leonard Bloomfield, who promoted mimicry and memorization with pattern drills. These methods
follow from the basic empiricist position that language acquisition basically results from habits
formed by conditioning and drilling. In its most extreme form, language learning is seen as basically
the same as any other learning in any other species, human language being essentially the same as
communication behaviors seen in other species.
On the theoretical side are, for example, François Gouin, M.D. Berlitz, and Émile de Sauzé, whose
rationalist theories of language acquisition dovetail with linguistic work done by Noam Chomsky and
others. These have led to a wider variety of teaching methods ranging from the grammar-translation
method to Gouin's "series method" to the direct methods of Berlitz and de Sauzé. With these
methods, students generate original and meaningful sentences to gain a functional knowledge of the
rules of grammar. This follows from the rationalist position that man is born to think and that
language use is a uniquely human trait impossible in other species. Given that human languages
share many common traits, the idea is that humans share a universal grammar which is built into our
brain structure. This allows us to create sentences that we have never heard before but that can still
be immediately understood by anyone who understands the specific language being spoken. The
rivalry of the two camps is intense, with little communication or cooperation between them.
Methods of teaching foreign languages
Language education may take place as a general school subject or in a specialized language school.
There are many methods of teaching languages. Some have fallen into relative obscurity and others
are widely used; still others have a small following, but offer useful insights.
While sometimes confused, the terms "approach", "method" and "technique" are hierarchical
concepts. An approach is a set of correlative assumptions about the nature of language and
language learning, but does not involve procedure or provide any details about how such

assumptions should translate into the classroom setting. Approaches of this kind are closely related
to theories of second language acquisition.
There are three principal views at this level:
1. The structural view treats language as a system of structurally related elements to code meaning
(e.g. grammar).
2. The functional view sees language as a vehicle to express or accomplish a certain function, such
as requesting something.
3. The interactive view sees language as a vehicle for the creation and maintenance of social
relations, focusing on patterns of moves, acts, negotiation and interaction found in conversational
exchanges. This view has been fairly dominant since the 1980s.
A method is a plan for presenting the language material to be learned and should be based upon a
selected approach. In order for an approach to be translated into a method, an instructional system
must be designed considering the objectives of the teaching/learning, how the content is to be
selected and organized, the types of tasks to be performed, the roles of students and the roles of
teachers. A technique is a very specific, concrete stratagem or trick designed to accomplish an
immediate objective. Techniques are derived from the controlling method and, less directly, from the
approach.
The grammar translation method
The grammar translation method instructs students in grammar, and provides vocabulary with direct
translations to memorize. It was the predominant method in Europe in the 19th century. Most
instructors now acknowledge that this method is ineffective by itself. It is now most commonly used
in the traditional instruction of the classical languages.
At school, the teaching of grammar consists of training in the rules of a language, which should make
it possible for all students to express their opinions correctly, to understand the remarks addressed
to them, and to analyze the texts they read. The objective is that by the time they leave college,
pupils have command of the tools of the language (vocabulary, grammar and orthography) and are able
to read, understand and write texts in various contexts. The teaching of grammar examines texts and
develops awareness that language constitutes a system which can be analyzed. For example, many
Spanish teachers like to use "La Gran Aventura de Alejandro" to teach their students, because while
many young Spanish natives would find the book simple to read, the average person learning Spanish
would find it ideal. This knowledge is acquired gradually, by working through the facts of the
language and its syntactic mechanisms, going from the simplest to the most complex. The exercises set
by the program of the course must be practiced untiringly to allow the assimilation of the rules
stated in the course. That presupposes that the teacher corrects the exercises. The pupil can follow
his progress in practicing the language by comparing his results, and can thus internalize the
grammatical rules and gradually master the internal logic of the syntactic system. The grammatical
analysis of sentences constitutes the objective of the teaching of grammar at school. Its practice
makes it possible to recognize a text as a coherent whole and conditions the learning of a foreign
language. Grammatical terminology serves this objective. Grammar makes it possible for each person to
understand how the mother tongue functions, in order to give him or her the capacity to communicate
thoughts.
The direct method
The direct method, sometimes also called the natural method, refrains from using the learners' native
language and uses only the target language. It was established in Germany and France around 1900 and
is best represented by the methods devised by Berlitz and de Sauzé, although neither claimed
originality, and it has been re-invented under other names. The direct method
operates on the idea that second language learning must be an imitation of first language learning, as

this is the natural way humans learn any language: a child never relies on another language to learn
its first language, and thus the mother tongue is not necessary to learn a foreign language. This
method places great stress on correct pronunciation and the target language from the outset. It
advocates teaching of oral skills at the expense of every traditional aim of language teaching. Such
methods rely on directly representing an experience into a linguistic construct rather than relying on
abstractions like mimicry, translation and memorizing grammar rules and vocabulary.
According to this method, printed language and text must be kept away from the second language
learner for as long as possible, just as a first language learner does not use the printed word until
he has a good grasp of speech. Learning of writing and spelling should be delayed until after the printed word
has been introduced, and grammar and translation should also be avoided because this would
involve the application of the learner's first language. All of the above must be avoided because they
hinder the acquisition of a good oral proficiency.
The method relies on a step-by-step progression based on question-and-answer sessions which begin
with naming common objects such as doors, pencils, floors, etc. It provides a motivating start as the
learner begins using a foreign language almost immediately. Lessons progress to verb forms and
other grammatical structures with the goal of learning about thirty new words per lesson.

The series method


In the 19th century, François Gouin went to Hamburg to learn German. Based on his experience as a
Latin teacher, he thought the best way to do this would be to memorize a German grammar book and a
table of its 248 irregular verbs. However, when he went to the academy to test his new language
skills, he was disappointed to find out that he could not understand anything. Trying again, he
similarly memorized the 800 root words of the language as well as re-memorizing the grammar and
verb forms. However, the results were the same. During this time, he had isolated himself from
people around him, so he tried to learn by listening, imitating and conversing with the Germans
around him, but found that his carefully-constructed sentences often caused native German speakers
to laugh. Again he tried a more classical approach, translation, and even memorizing the entire
dictionary but had no better luck.
When he returned home, he found that his three-year-old nephew had learned to speak French. He
noticed the boy was very curious and upon his first visit to a mill, he wanted to see everything and be
told the name of everything. After digesting the experience silently, he then reenacted his
experiences in play, talking about what he learned to whoever would listen or to himself. Gouin
decided that language learning was a matter of transforming perceptions into conceptions, using
language to represent what one experiences. Language is not an arbitrary set of conventions but a
way of thinking and representing the world to oneself. It is not a conditioning process, but one in
which the learner actively organizes his perceptions into linguistic concepts.
Variation of direct method
The series method is a variety of the direct method (above) in that experiences are directly
connected to the target language. Gouin felt that such direct "translation" of experience into words
makes for a "living language" (p. 59). Gouin also noticed that children organize concepts in succession
of time, relating a sequence of concepts in the same order. Gouin suggested that students learn a
language more quickly and retain it better if it is presented through a chronological sequence of
events. Students learn sentences based on an action such as leaving a house in the order in which
such would be performed. Gouin found that if the series of sentences is shuffled, their
memorization becomes nearly impossible. In this, Gouin anticipated the psycholinguistic theory of the
20th century. He found that people will memorize events in a logical sequence, even if they are not
presented in that order. He also discovered a second insight into memory called "incubation".
Linguistic concepts take time to settle in the memory. The learner must use the new concepts

frequently after presentation, either by thinking or by speaking, in order to master them. His last
crucial observation was that language was learned in sentences with the verb as the most crucial
component. Gouin would write a series in two columns: one with the complete sentences and the
other with only the verb. With only the verb elements visible, he would have students recite the
sequence of actions in full sentences; a series comprised no more than twenty-five sentences. Another
exercise involved having the teacher solicit a sequence of sentences by asking the student what he or
she would
do next. While Gouin believed that language was rule-governed, he did not believe it should be
explicitly taught.
His course was organized around elements of human society and the natural world. He estimated that a
language could be learned with 800 to 900 hours of instruction over a series of 4000 exercises and no
homework. The idea was that each of the exercises would force the student to think about the
vocabulary in terms of its relationship with the natural world. While there is evidence that the
method can work extremely well, it has some serious flaws. One of these is the teaching of subjective
language, in which the students must make judgments about what is experienced in the world (e.g.
"bad" and "good"), as such judgments do not relate easily to one single common experience.
However, the real weakness is that the method is entirely based on one experience of a three-year-old.
Gouin did not observe the child's earlier language development such as naming (where only
nouns are learned) or the role that stories have in human language development. What distinguishes
the series method from the direct method is that vocabulary must be learned by translation from the
native language, at least in the beginning.
The oral approach / situational language teaching
The oral approach was developed from the 1930s to the 1960s by British applied linguists such as
Harold Palmer and A.S. Hornby. They were familiar with the direct method as well as the work of
19th-century applied linguists such as Otto Jespersen and Daniel Jones, but attempted to develop a
more scientifically founded approach to teaching English than was evidenced by the direct
method.
A number of large-scale investigations about language learning and the increased emphasis on
reading skills in the 1920s led to the notion of "vocabulary control". It was discovered that languages
have a core basic vocabulary of about 2,000 words that occur frequently in written texts, and it
was assumed that mastery of these would greatly aid reading comprehension. Parallel to this was the
notion of "grammar control", emphasizing the sentence patterns most-commonly found in spoken
conversation. Such patterns were incorporated into dictionaries and handbooks for students. The
principal difference between the oral approach and the direct method was that methods devised
under this approach would have theoretical principles guiding the selection of content, gradation of
difficulty of exercises and the presentation of such material and exercises. The main proposed
benefit was that such theoretically-based organization of content would result in a less-confusing
sequence of learning events with better contextualization of the vocabulary and grammatical
patterns presented. Last but not least, all language points were to be presented in "situations".
Emphasis on this point led to the approach's second name. Such learning in situ would lead to
students' acquiring good habits to be repeated in their corresponding situations. Teaching methods
stress PPP (presentation (introduction of new material in context), practice (a controlled practice
phase) and production (activities designed for less-controlled practice)).
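
The idea behind "vocabulary control" described above can be illustrated with a short sketch that
counts word frequencies in a corpus of written texts and keeps the most common items as a core
vocabulary. This is a minimal Python illustration under stated assumptions, not a tool the oral
approach linguists used: the corpus file name and the function name are hypothetical, and the
2,000-word cutoff simply mirrors the figure cited above.

import re
from collections import Counter

def core_vocabulary(text, size=2000):
    """Return the `size` most frequent word forms in `text`."""
    words = re.findall(r"[a-z']+", text.lower())  # crude tokenization
    return [word for word, count in Counter(words).most_common(size)]

# Hypothetical corpus file; real vocabulary-control lists of the era were compiled by hand.
with open("corpus.txt", encoding="utf-8") as f:
    core = core_vocabulary(f.read())
print(core[:20])  # peek at the most frequent words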
Although this approach is all but unknown among language teachers today, elements of it have had
long lasting effects on language teaching, being the basis of many widely-used English as a
Second/Foreign Language textbooks as late as the 1980s and elements of it still appear in current
texts. Many of the structural elements of this approach were called into question in the 1960s,
causing modifications of this method that led to communicative language teaching. However, its
emphasis on oral practice, grammar and sentence patterns still finds widespread support among


language teachers and remains popular in countries where foreign language syllabuses are still
heavily based on grammar.
The audio-lingual method
The audio-lingual method was developed in the United States around World War II, when the government
realized that it needed more people who could conduct conversations fluently in a variety of
languages and work as interpreters, code-room assistants, and translators. However, since foreign
language instruction in the country was heavily focused on reading instruction, no textbooks, other
materials or courses
existed at the time, so new methods and materials had to be devised. For example, the U.S. Army
Specialized Training Program created intensive programs based on the techniques Leonard
Bloomfield and other linguists devised for Native American languages, where students interacted
intensively with native speakers and a linguist in guided conversations designed to decode the
language's basic
grammar and learn the vocabulary. This "informant method" had great success with its small class
sizes and motivated learners.
The U.S. Army Specialized Training Program only lasted a few years, but it gained a lot of attention
from the popular press and the academic community. Charles Fries set up the first English Language
Institute at the University of Michigan, to train English as a second or foreign language teachers.
Similar programs were created later at Georgetown University and the University of Texas, among others,
based on the methods and techniques used by the military. The developing method had much in
common with the British oral approach although the two developed independently. The main
difference was the developing audio-lingual method's allegiance to structural linguistics, focusing on
grammar and contrastive analysis to find differences between the student's native language and the
target language in order to prepare specific materials to address potential problems. These materials
strongly emphasized drill as a way to avoid or eliminate these problems.
This first version of the method was originally called the oral method, the aural-oral method or the
structural approach. The audio-lingual method truly began to take shape near the end of the 1950s,
this time due to government pressure resulting from the space race. Courses and techniques were
redesigned to add insights from behaviorist psychology to the structural linguistics and contrastive
analysis already being used. Under this method, students listen to or view recordings of language
models acting in situations. Students practice with a variety of drills, and the instructor emphasizes
the use of the target language at all times. The idea is that by reinforcing 'correct' behaviors,
students will make them into habits.
Due to weaknesses in performance, and more importantly because of Noam Chomsky's theoretical
attack on language learning as a set of habits, audio-lingual methods are rarely the primary method
of instruction today. However, elements of the method still survive in many textbooks.
Communicative language teaching
Communicative language teaching (CLT), also known as the Communicative Approach, emphasizes
interaction as both the means and the ultimate goal of learning a language. Despite a number of
criticisms it continues to be popular, particularly in Europe, where constructivist views on language
learning and education in general dominate academic discourse. Strictly speaking, however,
communicative language teaching is not so much a method in its own right as it is an approach.
In recent years, task-based language learning (TBLL), also known as task-based language teaching
(TBLT) or task-based instruction (TBI), has grown steadily in popularity. TBLL is a further refinement
of the CLT approach, emphasizing the successful completion of tasks as both the organizing feature
and the basis for assessment of language instruction. Dogme language teaching shares a philosophy
with TBLL, although it differs in approach. Dogme is a communicative approach to language teaching
that encourages teaching without published textbooks, focusing instead on conversational
communication among the learners and the teacher.
Language immersion

Language immersion in school contexts delivers academic content through the medium of a foreign
language, providing support for L2 learning and first language maintenance. There are three main
types of immersion education programs in the United States: foreign language immersion, dual
immersion, and indigenous immersion.
Foreign language immersion programs in the U.S. are designed for students whose home language is
English. In the early immersion model, elementary school children receive their content (academic)
instruction, for all or part of the school day, through the medium of another language: Spanish,
French, German, Chinese, Japanese, etc. In early total immersion models, children receive all the
regular kindergarten and first grade content through the medium of the immersion language; English
reading is introduced later, often in the second grade. Most content (math, science, social studies,
art, music) continues to be taught through the immersion language. In early partial immersion
models, part of the school day (usually 50%) delivers content through the immersion language, and
part delivers it through English. French-language immersion programs are common in Canada in the
provincial school systems, as part of the drive towards bilingualism, and are increasing in number in
the United States in public school systems (Curtain & Dahlberg, 2004). Branaman & Rhodes (1998)
report that between 1987 and 1997 the percentage of elementary programs offering foreign language
education in the U.S. through immersion grew from 2% to 8%, and Curtain & Dahlberg (2004) report
278 foreign language immersion programs in 29 states. Research by Swain and others (Genesee
1987) demonstrates much higher levels of proficiency achieved by children in foreign language
immersion programs than in traditional foreign language education elementary school models.
Dual immersion programs in the U.S. are designed for students whose home language is English as
well as for students whose home language is the immersion language (usually Spanish). The goal is
bilingual students with mastery of both English and the immersion language. As in partial foreign
language immersion, academic content is delivered through the medium of the immersion language
for part of the school day, and through English the rest of the school day.
Indigenous immersion programs in the U.S. are designed for American Indian communities desiring
to maintain the use of the native language by delivering elementary school content through the
medium of that language. Hawaiian Immersion programs are the largest and most successful in this
category.
Minimalist/methodist
Paul Rowe's minimalist/methodist approach is underpinned by Paul Nation's
three actions of successful ESL teachers. Initially it was written specifically for
unqualified, inexperienced people teaching in EFL situations. However, experienced language
teachers are also responding positively to its simplicity. Language items are usually provided using
flashcards. There is a focus on language-in-context and multi-functional practices.
Directed practice
Directed practice has students repeat phrases. This method is used in U.S. diplomatic courses. It can
quickly provide a phrasebook-type knowledge of the language. Within these limits, the student's
usage is accurate and precise. However the student's choice of what to say is not flexible.
Learning by teaching
Learning by teaching is a widespread method in Germany, developed by Jean-Pol Martin. The
students take the teacher's role and teach their peers.
Proprioceptive language learning method
The proprioceptive language learning method (commonly called the feedback training method)
emphasizes simultaneous development of cognitive, motor, neurological, and auditory functions as all being

part of a comprehensive language learning process. Lesson development is as concerned with the
training of the motor and neurological functions of speech as it is with cognitive (memory) functions.
It further emphasizes that training of each part of the speech process must be simultaneous. The
proprioceptive method, therefore, emphasizes spoken language training, and is primarily used by
those wanting to perfect their speaking ability in a target language.
The proprioceptive method virtually stands alone as a second language acquisition (SLA) method in
that it bases its methodology on a speech pathology model. It stresses that mere knowledge (in the
form of vocabulary and grammar memory) is not the sole requirement for spoken language fluency,
but that the mind receives real-time feedback from both hearing and neurological receptors of the
mouth and related organs in order to constantly regulate the store of vocabulary and grammar
memory in the mind during speech.
For optimum effectiveness, it maintains that each of the components of second language acquisition
must be encountered simultaneously. It therefore advocates that all memory functions, all motor
functions and their neurological receptors, and all feedback from both the mouth and ears must
occur at exactly the same moment during instruction. Thus, according to the proprioceptive
method, all student participation must be done at full speaking volume. Further, in order to train
memory, after initial acquaintance with the sentences being repeated, all verbal language drills must
be done as a response to the narrated sentences which the student must repeat (or answer) entirely
apart from reading a text.

Silent Way
The Silent Way is a discovery learning approach, invented by Caleb Gattegno in the early 1960s. It is often
considered to be one of the humanistic approaches. It is called the Silent Way because the teacher is
usually silent, leaving room for the students to talk and explore the language. The students are
responsible for their own learning and are encouraged to interact with one another. The role of the
teacher is to give clues to the students, not to model the language.
Pimsleur method
Pimsleur language learning system is based on the research of and model programs developed by
American language teacher Paul Pimsleur. It involves recorded 30-minute lessons to be done daily,
with each lesson typically featuring a dialog, revision, and new material. Students are asked to
translate phrases into the target language, and occasionally to respond in the target language to lines
spoken in the target language. The instruction starts in the student's language but gradually changes
to the target language. Several all-audio programs now exist to teach various languages using the
Pimsleur Method. The syllabus is the same in all languages.

Michel Thomas Method


Michel Thomas Method is an audio-based teaching system developed by Michel Thomas, a language
teacher in the USA. It was originally done in person, although since his death it is done via recorded
lessons. The instruction is done entirely in the student's own language, although the student's
responses are always expected to be in the target language. The method focuses on constructing
long sentences with correct grammar and building student confidence. There is no listening practice,
and there is no reading or writing. The syllabus is ordered around the easiest and most useful
features of the language, and as such is different for each language.


Other
Several methodologies that emphasise understanding language in order to learn, rather than
producing it, exist as varieties of the comprehension approach. These include total physical response
and the natural approach of Stephen Krashen and Tracy D. Terrell.
Suggestopedia is a method that enjoyed popularity especially in earlier years, with both staunch
supporters and very strong critics, some claiming it is based on pseudoscience.
There is a lot of language learning software using the multimedia capabilities of computers.
Learning strategies
Code switching
Code switching, that is, changing between languages at some point in a sentence or utterance, is a
commonly used communication strategy among language learners and bilinguals. While traditional
methods of formal instruction often discourage code switching, students, especially those placed in a
language immersion situation, often use it. If viewed as a learning strategy, wherein the student uses
the target language as much as possible but reverts to their native language for any element of an
utterance that they are unable to produce in the target language, then it has the advantages that it
encourages fluency development and motivation and a sense of accomplishment by enabling the
student to discuss topics of interest to him or her early in the learning process, before requisite
vocabulary has been memorized. It is particularly effective for students whose native language is
English, due to the high probability of a simple English word or short phrase being understood by the
conversational partner.
Blended learning
Blended learning combines face-to-face teaching with distance education, frequently electronic,
either computer-based or web-based. It has been a major growth point in the ELT (English Language
Teaching) industry over the last ten years.
Some people, though, use the phrase 'Blended Learning' to refer to learning taking place while the
focus is on other activities. For example, playing a card game that requires calling for cards may allow
blended learning of numbers (1 to 10).
Skills teaching
When talking about language skills, the four basic ones are: listening, speaking, reading and writing.
However, other, more socially based skills have been identified more recently, such as summarizing,
describing and narrating. In addition, more general learning skills such as study skills and knowing
how one learns have been applied to language classrooms.
In the 1970s and 1980s the four basic skills were generally taught in isolation in a very rigid order,
such as listening before speaking. However, since then, it has been recognized that we generally use
more than one skill at a time, leading to more integrated exercises. Speaking is a skill that is often
underrepresented in the traditional classroom. This could be because it is considered a less academic
skill than writing, and because it is transient and improvised (and thus harder to assess and teach
through rote imitation).
More recent textbooks stress the importance of students working with other students in pairs and
groups, sometimes the entire class. Pair and group work give opportunities for more students to
participate more actively. However, supervision of pairs and groups is important to make sure
everyone participates as equally as possible. Such activities also provide opportunities for peer
teaching, where weaker learners can find support from stronger classmates.
Language education by region
Europe

Foreign language education


The 1995 European Commission White Paper "Teaching and learning – Towards the learning society"
stated that "upon completing initial training, everyone should be proficient in two Community
foreign languages". The Lisbon Summit of 2000 defined languages as one of the five key skills.
In fact, even in 1974, at least one foreign language was compulsory in all but two European member
states (Ireland and the United Kingdom, apart from Scotland). By 1998 nearly all pupils in Europe
studied at least one foreign language as part of their compulsory education, the only exception being
the Republic of Ireland, where primary and secondary schoolchildren learn both Irish and English, but
neither is considered a foreign language, although a third European language is also taught. Pupils in
upper secondary education learn at least two foreign languages in Belgium's Flemish community,
Denmark, Netherlands, Germany, Luxembourg, Finland, Sweden, Switzerland, Greece, Cyprus,
Estonia, Latvia, Lithuania, Poland, Romania, Serbia, Slovenia and Slovakia.
On average in Europe, at the start of foreign language teaching, pupils have lessons for three to four
hours a week. Compulsory lessons in a foreign language normally start at the end of primary school
or the start of secondary school. In Luxembourg, Norway, Italy and Malta, however, the first foreign
language starts at age six, and in Belgium's Flemish community at age 10. About half of the EU's
primary school pupils learn a foreign language.
English is the language taught most often at lower secondary level in the EU. There, 93% of children
learn English. At upper secondary level, English is even more widely taught. French is taught at lower
secondary level in all EU countries except Slovenia. A total of 33% of European Union pupils learn
French at this level. At upper secondary level the figure drops slightly to 28%. German is taught in
nearly all EU countries. A total of 13% of pupils in the European Union learn German in lower
secondary education, and 20% learn it at an upper secondary level.
Despite the high rate of foreign language teaching in schools, the number of adults claiming to speak
a foreign language is generally lower than might be expected. This is particularly true of native
English speakers: in 2004 a British survey showed that only one in 10 UK workers could speak a
foreign language. Less than 5% could count to 20 in a second language, for example. 80% said they
could work abroad anyway, because "everyone speaks English." In 2001, a European Commission
survey found that 65.9% of people in the UK spoke only their native tongue.
Since the 1990s, the Common European Framework of Reference for Languages has tried to
standardize the learning of languages across Europe (one of the first results being UNIcert).
Bilingual education
In some countries, learners have lessons taken entirely in a foreign language: for example, more than
half of European countries with a minority or regional language community use partial immersion to
teach both the minority and the state language.
In the 1960s and 1970s, some central and eastern European countries created a system of bilingual
schools for well-performing pupils. Subjects other than languages were taught in a foreign language.
In the 1990s this system was opened to all pupils in general education, although some countries still
make candidates sit an entrance exam. At the same time, Belgium's French community, France, the
Netherlands, Austria and Finland also started bilingual schooling schemes. Germany meanwhile had
established some bilingual schools in the late 1960s.
United States
In most school systems, foreign language is taken in high school, with many schools requiring one to
three years of foreign language in order to graduate. In some school systems, foreign language is also
taught during middle school, and recently, many elementary schools have begun teaching foreign
languages as well. However, foreign language immersion programs are growing in popularity, making
it possible for elementary school children to begin serious development of a second language.

In late 2009 the Center for Applied Linguistics completed an extensive survey documenting foreign
language study in the United States. The most popular language is Spanish, due to the large number
of recent Spanish-speaking immigrants to the United States (see Spanish in the United States).
According to this survey, in 2008, 88% of language programs in elementary schools taught Spanish,
compared to 93% in secondary schools. Other languages taught in U.S. high schools in 2008, in
descending order of frequency, were French, German, Latin, Chinese, American Sign Language,
Italian, and Japanese. During the Cold War, the United States government pushed for Russian
education, and some schools still maintain their Russian programs. Other languages recently gaining
popularity include Arabic.
Australia
Prior to European colonisation, there were hundreds of Aboriginal languages, taught in a traditional
way. The arrival of a substantial number of Irish in the first English convict ships meant that European
Australia was never truly monolingual. When the gold rushes of the 1850s trebled the white
population, they brought many more Welsh speakers, who had their own language newspapers through
to the 1870s, but the absence of language education meant that these Celtic languages never
flourished.
Waves of European migration after World War II brought "community languages," sometimes with
schools. However, from 1788 until modern times it was generally expected that immigrants would
learn English and abandon their first language (Clyne, 1997). The wave of multicultural policies since
the 1970s has softened aspects of these attitudes.
In 1982 a bipartisan committee of Australian parliamentarians was appointed and identified a
number of guiding principles that would support a National Policy on Languages (NPL). Its overall aim
was bilingualism in all Australians, for reasons of fairness, diversity and economics.
In the 1990s the Australian Languages and Literacy Policy (ALLP) was introduced, building on the NPL,
with extra attention being given to the economic motivations of second language learning. A
distinction became drawn between priority languages and community languages. The ten priority
languages identified were Mandarin, French, German, Modern Greek, Indonesian, Japanese, Italian,
Korean, Spanish and Aboriginal languages.
However, Australia's federal system meant that the NPL and ALLP direction was really an overall
policy from above without much engagement from the states and territories. The NALSAS (National
Asian Languages and Studies in Australian Schools) strategy united Australian Government policy with
that of the states and territories. It focused on four
targeted languages: Mandarin, Indonesian, Japanese and Korean. This would be integrated into
studies of Society and Environment, English and Arts.
By 2000, the top ten languages enrolled in the final high school year were, in descending order:
Japanese, French, German, Chinese, Indonesian, Italian, Greek, Vietnamese, Spanish and Arabic. In
2002, only about 10% of Year 12 students included at least one Language Other Than English (LOTE) among
their course choices.
Japan
Language study holidays
An increasing number of people are now combining holidays with language study in the native
country. This enables the student to experience the target culture by meeting local people. Such a
holiday often combines formal lessons, cultural excursions, leisure activities, and a homestay,
perhaps with time to travel in the country afterwards. Language study holidays are popular across
Europe and Asia due to the ease of transportation and variety of nearby countries. These holidays
have become increasingly more popular in South America in such countries as Ecuador and
Peru.

With the increasing prevalence of international business transactions, it is now important to have
multiple languages at one's disposal. This is also evident in businesses outsourcing their departments
to Eastern Europe.
Language education on the Internet
The Internet has emerged as a powerful medium to teach and learn foreign languages. Websites that
provide language education on the Internet may be broadly classified under four categories:
1. Language exchange websites
2. Language portals
3. Virtual online schools
4. Support websites
Language exchange websites
Language exchange facilitates language learning by placing users with complementary language skills
in contact with each other. For instance, User A is a native Spanish speaker and wants to learn
English; User B is a native English speaker and wants to learn Spanish. Language exchange websites
essentially treat knowledge of a language as a commodity, and provide a market-like environment for
the commodity to be exchanged. Users typically contact each other via text chat, voice-over-IP, or
email.
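
The pairing logic such a site might use can be sketched briefly. This is a hypothetical Python
illustration, not any particular site's implementation; the User fields and the function name are
invented for the example.

from dataclasses import dataclass

@dataclass
class User:
    name: str
    native: str  # language spoken natively
    target: str  # language the user wants to learn

def match_partners(users):
    """Pair users whose native and target languages complement each other."""
    pairs, unmatched = [], list(users)
    while unmatched:
        a = unmatched.pop()
        for b in unmatched:
            if a.native == b.target and b.native == a.target:
                unmatched.remove(b)
                pairs.append((a, b))
                break
    return pairs

# User A natively speaks Spanish and wants English; User B is the reverse.
print(match_partners([User("A", "Spanish", "English"),
                      User("B", "English", "Spanish")]))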
Language exchanges have also been viewed as a helpful tool to aid language learning at language
schools. Language exchanges tend to benefit oral proficiency, fluency, colloquial vocabulary
acquisition, and vernacular usage, rather than formal grammar or writing skills.
Portals that provide language content
There are a number of Internet portals that offer language content, some in interactive form.
Content typically includes phrases with translations in multiple languages, text-to-speech (TTS)
engines, and learning activities such as quizzes or puzzles based on language concepts. While some of this
content is free, a large fraction of the content on offer is available for a fee, especially where the
content is tailored to the needs of language tests, such as TOEFL for the United States.
In general, language education on the Internet provides a good supplement to real world language
schooling. However, the commercial nature of the Internet, including pop-ups and occasionally
irrelevant text or banner ads, might be seen as a distraction from a good learning experience.
Virtual world-based language schools
These are schools operating online in MMOs and virtual worlds. Unlike other language education on
the Internet, virtual world schools are usually designed as an alternative to physical schools. In 2005,
the virtual world Second Life started to be used for foreign language tuition.
English as a foreign language has gained an online presence, with several schools operating entirely
online; the British Council has focused on the Teen Grid. Spain's language and cultural
institute Instituto Cervantes has an "island" on Second Life. A list of educational projects (including
some language schools) in Second Life can be found on the SimTeach site.
Support websites for language teachers
Since the 1990s, institutions like the Language Resource Centers, the British Council's Teaching
English, the Goethe-Institut's Deutsch Lehren, the FIPLV and the French Ministry of Foreign Affairs
with Franc-parler have been developing support websites for language teachers. A support website is a
reference area for language teachers.
The reach of these websites is international. They are at once an information site, a resource centre
and a forum for discussion and sharing resources. Regularly updated and accessible to everyone,

their purpose is to support teachers in their professional practice by facilitating exchange, with a
view to lifelong learning and promoting linguistic diversity. The language resource center should be a
sort of backbone, running through the different training systems (initial and in-service) and linking
up with other online support tools. It helps teachers develop new competences, especially in the
field of educational ICT and innovative practices, to support and enhance language teaching. It
promotes the use of the Council of Europe's CEFR (Common European Framework of Reference for
Languages) and the ELP (European Language Portfolio). Lastly, a support website helps language
teachers acquire a clearer vision of their own practices, through discussions not only with their
peers but also with educational institutions and associations.
Minority language education
Minority language education policy
The principal policy arguments in favor of promoting minority language education are the need for
multilingual workforces, intellectual and cultural benefits, and greater inclusion in the global
information society. Access to education in a minority language is also seen as a human right, as granted by the
European Convention on Human Rights and Fundamental Freedoms, the European Charter for
Regional or Minority Languages and the UN Human Rights Committee. Bilingual education has been
implemented in many countries, including the United States, in order to promote the use and
appreciation of both the minority language and the majority language concerned.
Materials and e-learning for minority language education
Suitable resources for teaching and learning minority languages can be difficult to find and access,
which has led to calls for the increased development of materials for minority language teaching. The
internet offers opportunities to access a wider range of texts, audio and video. Language learning
2.0 (the use of web 2.0 tools for language education) offers opportunities for material development
for lesser-taught languages and to bring together geographically dispersed teachers and learners.

Acronyms and abbreviations


CALL: computer-assisted language learning
CLIL: content and language integrated learning
CLL: community language learning
DELF: diplôme d'études en langue française
EFL: English as a foreign language
ELT: English language teaching
FLL: foreign language learning
FLT: foreign language teaching
L1: first language, native language, mother tongue
L2: second language (or any additional language)

LDL: Lernen durch Lehren (German for learning by teaching)


SLA: second language acquisition
TELL: technology-enhanced language learning
TEFL: teaching English as a foreign language
TEFLA: teaching English as a foreign language to adults
TPR: total physical response
TPRS: Teaching Proficiency through Reading and Storytelling
UNIcert: a European language education system of many universities, based on the Common
European Framework of Reference for Languages


Acquisition-learning hypothesis
In modern linguistics, there are many theories as to how humans are able to develop language
ability. According to Stephen Krashen's acquisition-learning hypothesis, there are two independent
ways in which we develop our linguistic skills: acquisition and learning. This theory is at the core of
modern language acquisition theory, and is perhaps the most fundamental of Krashen's theories on
Second Language Acquisition.
Acquisition
Acquisition of language is a subconscious process of which the individual is not aware. One is
unaware of the process as it is happening and when the new knowledge is acquired, the acquirer
generally does not realize that he or she possesses any new knowledge. According to Krashen, both
adults and children can subconsciously acquire language, and either written or oral language can
be acquired. This process is similar to the process that children undergo when learning their native
language. Acquisition requires meaningful interaction in the target language, during which the
acquirer is focused on meaning rather than form.
Learning
Learning a language, on the other hand, is a conscious process, much like what one experiences in
school. New knowledge or language forms are represented consciously in the learner's mind,
frequently in the form of language "rules" and "grammar" and the process often involves error
correction. Language learning involves formal instruction and, according to Krashen, is less
effective than acquisition.

Acculturation
Acculturation is the process whereby the attitudes and/or behaviors of people from one culture
are modified as a result of contact with a different culture. Acculturation implies a mutual
influence in which elements of two cultures mingle and merge. It has been hypothesized that in
order for acculturation to occur, some relative cultural equality has to exist between the giving and
the receiving culture. In contrast, assimilation is a process of cultural absorption of a minority
group into the main cultural body. In assimilation, the tendency is for the ruling cultural group to
enforce the adoption of their values rather than the blending of values. From a practical point of
view it may be hard to differentiate between acculturation and assimilation, for it is difficult to judge
whether people are free or not free to choose one or another aspect of a culture. The term "ethnic
identity" has sometimes been used in association with acculturation, but the two terms should be
distinguished. The concept of acculturation deals broadly with changes in cultural attitudes
between two distinct cultures. The focus is on the group rather than the individual, and on how
minority or immigrant groups relate to the dominant or host society. Ethnic identity may be thought
of as an aspect of acculturation in which the concern is with individuals and how they relate to
their own group as a subgroup of the larger society.
Acculturation is a complex concept, and two distinct models have guided its definition: a linear
model and a two-dimensional model. The linear model is based on the assumption that a strong
ethnic identity is not possible among those who become involved in the mainstream society and that
acculturation is inevitably accompanied by a weakening of ethnic identity. Alternatively, the
two-dimensional model suggests that both the relationship with the traditional or ethnic culture and
the relationship with the new or dominant culture play important roles in the process. Using the
two-dimensional model, J. W. Berry has suggested that there are four possible outcomes of the
acculturation process: assimilation (movement toward the dominant culture), integration
(synthesis of the two cultures), rejection (reaffirmation of the traditional culture), or
marginalization (alienation from both cultures). Similarly, Sodowsky and Plake have defined three

dimensions of acculturation: assimilation, biculturalism (the ability to live in both worlds, with denial
of neither), and observance of traditionality (rejection of the dominant culture).
The term "acculturation" was first used in anthropology in the late 1800s. Early studies dealt with the
patterns in Indian-Spanish assimilation and acculturation in Central and South America, the
consequences of contact between Native American tribes and whites, and the study of the culture of
Haiti as a derivative of West African and French patterns. Increasingly, the importance of
acculturation has been recognized in the social sciences, sociology, psychology, epidemiology, and
public health.

Action Research
Action research is a reflective process of progressive problem solving led by individuals working with
others in teams or as part of a "community of practice" to improve the way they address issues and
solve problems. Action research can also be undertaken by larger organizations or institutions,
assisted or guided by professional researchers, with the aim of improving their strategies, practices,
and knowledge of the environments within which they practice. As designers and stakeholders,
researchers work with others to propose a new course of action to help their community improve
its work practices (Center for Collaborative Action Research). Kurt Lewin, then a professor at MIT,
first coined the term action research in about 1944, and it appears in his 1946 paper "Action
Research and Minority Problems". In that paper, he described action research as "a comparative
research on the conditions and effects of various forms of social action and research leading to
social action" that uses "a spiral of steps, each of which is composed of a circle of planning,
action, and fact-finding about the result of the action".
Overview
Action research is an interactive inquiry process that balances problem solving actions
implemented in a collaborative context with data-driven collaborative analysis or research to
understand underlying causes enabling future predictions about personal and organizational
change (Reason & Bradbury, 2001). After six decades of action research development, many
methodologies have evolved that adjust the balance to focus more on the actions taken or more on
the research that results from the reflective understanding of the actions. This tension exists
between:
- those that are more driven by the researcher's agenda and those more driven by participants;
- those that are motivated primarily by instrumental goal attainment and those motivated primarily by
the aim of personal, organizational, or societal transformation; and
- 1st-, 2nd-, and 3rd-person research, that is, my research on my own action, aimed primarily at
personal change; our research on our group (family/team), aimed primarily at improving the group;
and scholarly research aimed primarily at theoretical generalization and/or large-scale change.
Action research challenges traditional social science by moving beyond reflective knowledge created by outside experts sampling variables to an active moment-to-moment theorizing, data collecting, and inquiring occurring in the midst of emergent structure. Knowledge is always gained through action and for action. From this starting point, to question the validity of social knowledge is to question, not how to develop a reflective science about action, but how to develop genuinely well-informed action: how to conduct an action science (Torbert 2001).

Affective filter

The affective filter is an impediment to learning or acquisition caused by negative emotional ("affective") responses to one's environment. It is a hypothesis of second language acquisition
theory, and a field of interest in educational psychology.
Major components of the hypothesis
According to the affective filter hypothesis, certain emotions, such as anxiety, self-doubt, and mere boredom, interfere with the process of acquiring a second language. They function as a filter
between the speaker and the listener that reduces the amount of language input the listener is
able to understand. These negative emotions prevent efficient processing of the language input.
The hypothesis further states that the blockage can be reduced by sparking interest, providing low-anxiety environments, and bolstering the learner's self-esteem.
History
Since Stephen Krashen first proposed this hypothesis in the 1970s, a considerable amount of
research has been done to test its claims. While the weight of that research is still not definitive, the
hypothesis has gained increasing support.

Aphasia
Aphasia is a communication disorder that occurs after language has been developed, usually in
adulthood. Not simply a speech disorder, aphasia can affect the ability to comprehend the speech
of others, as well as the ability to read and write. In most instances, intelligence per se is not
affected.
Description
Aphasia has been known since the time of the ancient Greeks. However, it has been the focus of
scientific study only since the mid-nineteenth century. Although aphasia can be caused by head injury and other neurologic conditions, its most common cause is stroke, a disruption of blood flow to the brain, which affects brain metabolism in localized areas of the brain.
usually abrupt, and occurs in individuals who have had no previous speech or language problems.
Aphasia is at its most severe immediately after the event that causes it. Although its severity commonly diminishes over time, through both natural, spontaneous recovery from brain damage and clinical intervention, individuals who remain aphasic for two or three months after its onset are
likely to have some residual aphasia for the rest of their lives. However, positive changes often
continue to occur, largely with clinical intervention, for many years. The severity of aphasia is related
to a number of factors, including the severity of the condition that brought it about, general overall
health, age at onset, and numerous personal characteristics that relate to motivation.
Demographics
The National Aphasia Association estimates that approximately 25-40% of stroke survivors develop
aphasia. There are approximately one million persons in the United States with aphasia, and roughly
100,000 new cases occur each year. There are more people with aphasia than with Parkinson's
disease, cerebral palsy, or muscular dystrophy.
Causes and symptoms
Although aphasia occasionally results from damage to subcortical structures such as the basal ganglia or the thalamus, which have rich interconnections to the cerebral cortex, aphasia is most frequently caused by damage to the cerebral cortex of the brain's left hemisphere. This hemisphere plays a significant role in the processing of language skills. However, in about half of left-handed individuals (and a few right-handed persons), this pattern of dominance for language is reversed, making right-hemisphere damage the cause of aphasia in this small minority. Because the left side of the brain
controls movement on the right side of the body (and vice versa), paralysis affecting the side of the
body opposite the side of brain damage is a frequent co-existing problem. This condition is called
hemiplegia and can affect walking, using one's arm, or both. If the arm used for writing is paralyzed,
it poses an additional burden on the diminished writing abilities of some aphasic individuals. If
paralysis affects the many muscles involved in speaking, such as the muscles of the tongue, this
condition is called dysarthria. Dysarthria often co-occurs with aphasia.
There are a few more problems that can result from the same brain injury that produces aphasia, and
complicate its presentation. Most notable among them are the problems collectively called apraxia,
which influences one's ability to program movement. Apraxic difficulties make voluntary movements
difficult and hard to initiate. Apraxia of speech results in difficulty initiating speech and in making
speech sounds consistently. It frequently co-occurs with both dysarthria and aphasia. Finally, sensory
problems such as visual field deficits (specifically, hemianopsia) and changes in (or absence of)
sensation in arms, legs, and tongue commonly occur with aphasia.
There are neurological disorders other than aphasia that also manifest difficulty with language. This
makes it important to note what aphasia is not. Traumatic brain injury and dementias such as
Alzheimer's disease are excellent examples. Although brain injury is a cause of aphasia, most head
injuries produce widespread brain damage and result in other neuropsychological and cognitive
disorders. These disorders often create language that is disturbed in output and form, but are
typically the linguistic consequences of cognitive disturbances. In Alzheimer's disease, the situation is
much the same. Language spoken by individuals with Alzheimer's reflects their cognitive problems and, as such, differs from the language retrieval problems typically designated as aphasia. In short, if the damage that results in language problems is general and produces additional intellectual problems, then aphasia is not the correct diagnosis. In the absence of other significant intellectual problems, the language disorder is probably localized to the brain's language processing areas and is properly termed aphasia.
Finally, aphasia is not conventionally used to refer to the developmental language learning problems
encountered by some atypically developing children. However, when children who have been
previously developing language normally have a stroke or some other type of localized brain damage,
then the aphasia diagnosis is appropriate.
Aphasia manifests different language symptoms and syndromes as a result of where in the language-dominant hemisphere the damage has occurred. The advent of neuroimaging has improved the
ability to localize the area of brain damage. Nevertheless, the different general patterns of language
strengths and weaknesses, as well as unexpected dissociations in language function, can explain how
normal language is processed in the brain, as well as provide insights into intervention for aphasia.
Aphasic individuals almost uniformly have some difficulty in using the substantive words of their
native language. Most experts in aphasia recognize that aphasia varies along two major dimensions:
auditory comprehension ability and fluency of speech output. In reality, aphasic behaviors vary
greatly from individual to individual, and fluctuate in a given individual as a result of fatigue and
other factors. In addition, largely in relationship to lesion size, aphasias differ in overall severity.
Nonfluent aphasia
The frontal cortex is responsible for shaping, initiating, and producing behaviors. Individuals with
nonfluent aphasia characteristically have brain damage affecting Broca's area of the cortex and the
frontal brain areas surrounding it. These areas are responsible for formulating sound, word, and
sentence patterns. Damage to the anterior speech areas results in slow, labored speech with limited
output and prosody and difficulty in producing grammatical sentences. Because the motor cortex is
closely adjacent, nonfluent Broca's aphasia, by far the most common nonfluent variant, is quite likely
to co-occur with motor problems.
Several additional characteristics of nonfluent aphasia can be noted: in nonfluent aphasia verbs and
prepositions are disproportionately affected; speech errors occur mostly at the level of speech
sounds, producing sound transpositions and inconsistencies; auditory comprehension is only
minimally affected; reading abilities parallel comprehension, writing problems parallel speech
output, but are sometimes further complicated by hemiplegia; finally, there is an inability to repeat
what someone else says.
Fluent aphasia
Fluent aphasias result from damage in the posterior language areas of the brain, where
sensory stimuli from hearing, sight, and bodily sensation converge. In fluent aphasia, the prosody and
flow of speech is maintained; one typically must listen closely to recognize that the speech is not
normal. Because this posterior damage is located far from the motor areas in the frontal lobes,
individuals with fluent aphasia seldom have co-existing difficulty with the mechanics of speech, arm
use, or walking. There are three major variants of fluent aphasia, each thought to occur as a function
of disruption to different posterior brain regions.
WERNICKE'S APHASIA Wernicke's aphasia results from temporal lobe damage, where auditory
input to the brain is received. The essential characteristic is that individuals with this disorder have
disproportionate difficulty in understanding spoken and written language. They also have problems comprehending and monitoring their own speech. They are often verbose, and frequently use inappropriate and even jargon words when they speak. Reading
and writing are impaired in similar ways to auditory comprehension and speech output. Their
comprehension difficulties preclude their being able to repeat others' words.
ANOMIC APHASIA Most people, particularly as they grow older, have trouble with the names of
persons and things; all aphasic persons experience these difficulties. But when brain damage occurs
in the area of the posterior brain where information from temporal, parietal, and occipital lobes
converges, this problem of naming becomes much more pervasive than it is for normal speakers and other aphasic speakers alike. Most anomic aphasic individuals have excellent auditory comprehension and read well. But for
most of them, writing mirrors speech, and individuals with anomic aphasia can take advantage of
words provided by others. Hence, their repetition ability is good. Although anomic aphasia is
classified as a fluent syndrome, frequent stops, starts, and word searches typically make speech
choppy in between runs of fluency.
CONDUCTION APHASIA Individuals with conduction aphasia are thought to have a discrete brain
lesion that disrupts the pathways that underlie the cortex and connect the anterior and posterior
speech regions. These individuals have good comprehension, as well as high awareness of the errors
that they make. Placement of their brain damage also suggests that there should be little
interference with speech production, reading, and writing. However, damage to the neural links
between posterior and anterior speech areas makes it quite difficult for these individuals to correct
the errors they hear themselves making. Conduction aphasia also affects the ability to repeat the
speech of others or to take advantage of the cues others provide. The speech of individuals with this
problem includes many inappropriate words, typically involving inappropriate sequences of sounds.
UNUSUAL APHASIA SYNDROMES There are a few other rare aphasic syndromes (called "transcortical
aphasias") and unique dissociations in aphasic patterns. The above aphasias represent the most
common distinctive syndromes. However, they are estimated to account for only approximately 40%
of individuals with aphasia.
MIXED AND GLOBAL APHASIA The remaining majority, about 60% of aphasic individuals, have
aphasias that result from brain lesions involving both the anterior and posterior speech areas. Their
aphasias, thus, affect both speech production and comprehension. They frequently have reading and
writing disorders as well. Individuals with mixed and global aphasia are also very likely to have
hemiplegia and dysarthria, as well as a variety of sensation losses. Depending upon the severity of
these symptoms, people with mild-to-moderate symptomatology of this type are said to have mixed
aphasia; global aphasia describes individuals with extensive difficulties in all language skills.

Arbitrariness
The absence of any necessary connection between the form of a word and its meaning. Every
language typically has a distinct word
to denote every object, activity and concept its speakers want to talk about. Each such word must be
formed in a valid manner according to the phonology of the language. But, in most cases, there is
absolutely no reason why a given meaning should be denoted by one sequence of sounds rather than
another. In practice, the particular sequence of sounds selected in a given language is completely
arbitrary: anything will do, so long as speakers agree about it. Speakers of different languages, of
course, make different choices.
A certain large snouted animal is called a pig in English, a Schwein in German, a cochon in French, a
cerdo in Spanish, a mochyn in Welsh, a txerri in Basque, a numbran in Yimas (a language of New
Guinea), and so on across the world. None of these names is more suitable than any other: each
works fine as long as speakers are in agreement. Such agreement need not be for all time. The
animal was formerly called a swine in English, but this word has dropped out of use as a name for the
animal and been replaced by pig.
Arbitrariness can be demonstrated the other way round. Many languages allow a word to have the
phonetic form [min], but there is no earthly way of predicting the meaning of this word if it should
exist. In English, [min] (spelled mean) exists and has several unrelated meanings: 'tight-fisted', 'cruel', 'average', 'signify'. French mine means '(coal) mine'; Welsh min is 'edge'; Irish min is 'meal'; Basque min is 'pain'; Arabic min is 'from'. There is nothing about this sequence of sounds that makes
one meaning more likely than another. Arbitrariness is pervasive in human languages (and also in
animal communication), but there does nonetheless exist a certain amount of iconicity: cases in
which the relation between form and meaning is not totally arbitrary. Unfortunately, even with some
iconicity, it is the presence of massive arbitrariness which makes impossible the universal translator
beloved of science-fiction films, unless the machine worked by telepathy rather than linguistics.
Because of arbitrariness, even the most powerful computer program can have no way of guessing
the meaning of a word it has not encountered before. Linguists have long realized the importance of
arbitrariness, but
it was particularly stressed by the Swiss linguist Ferdinand de Saussure in the early twentieth century,
with his concept of the linguistic sign.
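The consequence for computation can be made concrete. In the Python sketch below (a toy lexicon assembled from the examples in this entry), meanings can only be stored and looked up, never computed from the form itself:

    # Because form-meaning pairings are arbitrary, a program can only look
    # meanings up; nothing in the sound shape [min] predicts its meaning.
    lexicon = {
        ("English", "min"): "tight-fisted; cruel; average; signify",  # spelled "mean"
        ("French",  "min"): "(coal) mine",                            # spelled "mine"
        ("Welsh",   "min"): "edge",
        ("Irish",   "min"): "meal",
        ("Basque",  "min"): "pain",
        ("Arabic",  "min"): "from",
    }

    def meaning(language: str, form: str) -> str:
        # For an unseen pairing there is nothing to fall back on.
        return lexicon.get((language, form), "unknown: not predictable from form")

    print(meaning("Basque", "min"))   # pain
    print(meaning("Yimas", "min"))    # unknown: not predictable from form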

Attribution Theory (Weiner)


Summary: Attribution Theory attempts to explain the world and to determine the cause of an
event or behavior (e.g. why people do what they do).
Originator: Bernard Weiner (1935- )
Weiner developed a theoretical framework that has become very influential in social psychology
today. Attribution theory assumes that people try to determine why people do what they do, that is, attribute causes to an event or behavior. A three-stage process underlies an attribution:
1. behavior must be observed/perceived
2. behavior must be determined to be intentional
3. behavior attributed to internal or external causes

Weiner's attribution theory is mainly about achievement. According to him, the most important
factors affecting attributions are ability, effort, task difficulty, and luck. Attributions are classified
along three causal dimensions:

1. locus of control (two poles: internal vs. external)
2. stability (do causes change over time or not?)
3. controllability (causes one can control such as skills vs. causes one cannot control such as luck, others' actions, etc.)
When one succeeds, one attributes success internally ("my own skill"). When a rival succeeds, one tends to credit external factors (e.g. luck). When one fails or makes mistakes, one is more likely to use external attribution, attributing causes to situational factors rather than blaming oneself. When others fail or make mistakes, internal attribution is often used, saying it is due to their internal personality factors.
1. Attribution is a three-stage process: (1) behavior is observed, (2) behavior is determined to be
deliberate, and (3) behavior is attributed to internal or external causes.
2. Achievement can be attributed to (1) effort, (2) ability, (3) level of task difficulty, or (4) luck.
3. Causal dimensions of behavior are (1) locus of control, (2) stability, and (3) controllability.
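The interplay of the four achievement factors and the three causal dimensions is often summarized in a small table. The sketch below renders the common textbook classification in Python; treat the cell values as the usual textbook reading rather than a verbatim reproduction of Weiner's own tables:

    # Weiner's four achievement attributions classified along his three
    # causal dimensions, following the common textbook classification.
    attributions = {
        # factor:          (locus,      stable?, controllable?)
        "ability":         ("internal", True,    False),
        "effort":          ("internal", False,   True),
        "task difficulty": ("external", True,    False),
        "luck":            ("external", False,   False),
    }

    for factor, (locus, stable, controllable) in attributions.items():
        print(f"{factor:15} locus={locus:8} stable={stable} controllable={controllable}")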

Attrition (Language)
Language attrition is the loss of a first or second language or a portion of that language by
individuals. Speakers who routinely use more than one language may not use either of their
languages in ways which are exactly like that of a monolingual speaker. In sequential bilingualism, for
example, there is often evidence of interference from the first language (L1) in the second language
(L2) system. Describing these interference phenomena and accounting for them on the basis of
theoretical models of linguistic knowledge has long been a focus of interest of Applied Linguistics.
More recently, research has started to investigate linguistic traffic which goes the other way: L2
interferences and contact phenomena evident in the L1. Such phenomena are probably experienced
to some extent by all bilinguals. They are, however, most evident among speakers for whom a
language other than the L1 has started to play an important, if not dominant, role in everyday life
(Schmid and Köpke, 2007). This is the case for migrants who move to a country where a language is
spoken which, for them, is a second or foreign language. We refer to the phenomena of L1 change
and L2 interference which can be observed in such situations as language attrition.
The term 'First Language Attrition' (FLA) refers to the gradual decline in native language proficiency
among migrants. As a speaker uses his/her L2 frequently and becomes proficient (or even dominant)
in it, some aspects of the L1 can become subject to L2 influence or deteriorate.
L1 attrition is a process which is governed by two factors: the presence and development of the L2
system on the one hand, and the diminished exposure to and use of the L1 on the other (Schmid &
Köpke, 2007); that is, it is a process typically witnessed among migrants who use the later-learned
environmental language in daily life. The current consensus is that attrition manifests itself first and
most noticeably in lexical access and the mental lexicon (e.g. Ammerlaan, 1996; Schmid & Köpke,
2008) while grammatical and phonological representations appear more stable among speakers for
whom emigration took place after puberty (Schmid, 2009).
Attrition research has often wrestled with the problem of how to establish the border between the
normal influence of the L2 on the L1, which all bilinguals probably experience to some degree (as is
suggested by, among others, Cook 2003), and the (consequently to some degree abnormal) process
of L1 attrition, which is confined to migrants. It has recently been suggested that this distinction is
not only impossible to draw, but also unhelpful, as bilinguals may not have "one normal language (in which they are indistinguishable from monolinguals [...]) and one deviant one (in which knowledge is less extensive than that of monolinguals, and also tainted by interference from L1 in SLA and from L2 in attrition)" (Schmid & Köpke 2007:3). Rather, while L1 attrition may be the most
clearly pronounced end of the entire spectrum of multicompetence, and therefore a more satisfying
object of investigation than the L1 system of a beginning L2 learner (which may not show substantial
and noticeable signs of change), attrition is undoubtedly part of this continuum, and not a discrete
and unique state of development.
Like second language acquisition (SLA), FLA is mediated by a number of external factors, such as
exposure and use (e.g. Hulsen 2000; Schmid 2007, Schmid & Dusseldorp 2009), attitude and
motivation (Ben-Rafael & Schmid 2007, Schmid 2002) or aptitude (Bylund 2008). However, the
overall impact of these factors is far less strongly pronounced than what has been found in SLA.
L1 attriters, like L2 learners, may use language differently from native speakers. In particular, they
can have variability on certain rules which native speakers apply deterministically (Sorace 2005,
Tsimpli et al. 2004). In the context of attrition, however, there is strong evidence that this optionality
is not indicative of any underlying representational deficits: the same individuals do not appear to
encounter recurring problems with the same kinds of grammatical phenomena in different speech
situations or on different tasks (Schmid 2009). This suggests that problems of L1 attriters are due to
momentary conflicts between the two linguistic systems and not indicative of a structural change to
underlying linguistic knowledge (that is, to an emerging representational deficit of any kind).
This assumption is in line with a range of investigations of L1 attrition which argue that this process
may affect interface phenomena (e.g. the distribution of overt and null subjects in pro-drop
languages) but will not touch the narrow syntax (e.g. Tsimpli et al. 2004, Montrul 2004, 2008).
Manifestations of language attrition
L1
Lexical L1 attrition
It has often been pointed out that L1 attrition usually first manifests itself in the lexicon (Schmid &
Köpke, 2008). It is possible for lexical representations in the L1 to be influenced by the semantic
potential of corresponding items in the L2. Instances of such interlanguage effects are reported by
e.g. Pavlenko (2003, 2004), who concludes that for her L1 Russian speakers, a number of Russian
terms appear to have gained a different meaning by semantic extension from their L2, English.
Secondly, it has been noted that among attriters, lexical access can become impaired, resulting in
poorer performance on picture naming tasks (Ammerlaan, 1996; Hulsen, 2000, Montrul 2008) and
reduced lexical diversity in free speech (Schmid, 2002).
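Lexical diversity in free speech is often quantified with simple measures such as the type-token ratio (TTR). The Python sketch below shows raw TTR only as a rough illustration; attrition studies typically prefer length-corrected variants, since raw TTR falls as samples grow longer:

    # Type-token ratio (TTR): distinct words / total words, a rough proxy
    # for the lexical diversity of a speech sample. Illustrative only.
    import re

    def type_token_ratio(transcript: str) -> float:
        tokens = re.findall(r"[a-z']+", transcript.lower())
        return len(set(tokens)) / len(tokens) if tokens else 0.0

    sample = "the dog chased the cat and the cat chased the dog"
    print(round(type_token_ratio(sample), 2))  # 0.45 (5 types / 11 tokens)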
Grammatical L1 attrition
Generative approaches to L1 attrition often focus on the possibility that the developing linguistic
system may show evidence of irrevocable structural changes to the actual grammar of a native
language. This was highlighted early on in the history of attrition research: "It is crucial to know whether a given example of language loss can be attributed to a change in how the relevant language is represented in the mind of the user or to a change in the way stable knowledge (competence) is being used" (Sharwood Smith 1983:49, his emphasis). In a similar vein, Seliger and Vago define the object of investigation as "the disintegration or attrition of the structure of a first language (L1) in contact situations with a second language (L2)" (Seliger and Vago 1991:3). However, most research
appears to indicate that attrition does not affect uninterpretable features, but that variability may be
observed in features that are interpretable at the interface levels (Tsimpli et al. 2004:274; Tsimpli
2007: 85). There therefore seems to be little evidence for an actual restructuring of the language
system: the narrow syntax remains unaffected, and the observed variability may be ascribed to the
cognitive demands of bilingual processing.
L2
Lambert and Moore (1986) attempted to define numerous hypotheses regarding the nature of
language loss, crossed with various aspects of language. They envisioned a test to be given to
American State Department employees that would include four linguistic categories (syntax,
morphology, lexicon, and phonology) and three skill areas (reading, listening, and speaking). A
translation component would feature on a sub-section of each skill area tested. The test was to
include linguistic features which are most difficult, according to teachers, for students to master.
Such a test may confound testing what was not acquired with what was lost. Lambert, in personal
communication with Köpke and Schmid (2004), described the results as 'not substantial enough to
help much in the development of the new field of language skill attrition'.
The use of translation tests to study language loss is inappropriate for a number of reasons: it is questionable what such tests measure; there is too much individual variation; the difference between attriters and bilinguals is complex; and activating two languages at once may cause interference.
Yoshitomi (1992) attempted to define a model of language attrition that was related to neurological
and psychological aspects of language learning and unlearning. She discussed four possible
hypotheses and five key aspects related to acquisition and attrition. The hypotheses are:
1. Reverse order: last learned, first forgotten. Studies by Russell (1999) and Hayashi (1999) both looked at the Japanese negation system and both found that attrition mirrored the reverse order of acquisition. Yoshitomi and others, including Yukawa (1998), argue that attrition can occur so rapidly that it is impossible to determine the order of loss.
2. Inverse relation: better learned, better retained. Language items that are acquired first also
happen to be those that are most reinforced. As a result, hypotheses 1 and 2 "capture the main linguistic characteristics of language attrition" (Yoshitomi, p. 297).
3. Critical period: at or around age 9. As a child grows, he becomes less able to master native-like
abilities. Furthermore, various linguistic features (for example phonology or syntax) may have
different stages or age limits for mastering. Hyltenstam & Abrahamsson (2003) argue that after
childhood, in general, it becomes more and more difficult to acquire "native-like-ness", but that there is no cut-off point in particular. Furthermore, they discuss a number of cases where a native-like L2 was acquired during adulthood.
4. Affect: motivation and attitude.
According to Yoshitomi, the five key aspects related to attrition are: neuroplasticity, consolidation,
permastore/savings, decreased accessibility, and receptive vs. productive abilities.
The regression hypothesis (last thing learned, first thing lost)
The regression hypothesis, first formulated by Roman Jakobson in 1941, goes back to the beginnings
of psychology and psychoanalysis. Generally speaking, it states that that which was learned first will
be retained last, both in 'normal' processes of forgetting and in pathological conditions such as
aphasia or dementia. As a template for language forgetting, the regression hypothesis has long
seemed an attractive paradigm. However, as Keijzer (2007) points out, regression is not in itself a
theoretical or explanatory framework. Both order of acquisition and order of attrition need to be put
into the larger context of linguistic theory in order to gain explanatory adequacy.
L1
Keijzer (2007) conducts a study on the L1 attrition of Dutch in anglophone Canada. Her study
compares language acquisition in children with non-pathological language attrition found in emigrant
populations and contrasts it with language use in non-attrited, mature speakers. In particular, it
examines the parallels and divergences between advanced stages of L1 Dutch acquisition (in
adolescents) and the L1 attrition of Dutch émigrés in anglophone Canada, as opposed to control
subjects. She finds some evidence that later-learned rules, for example with respect to diminutive
and plural formation, do indeed erode before the earlier learned information. However, there is also
considerable interaction between the first and second language, so a straightforward 'regression
pattern' cannot be observed.
L2
Citing the studies on the regression hypothesis that have been done, Yukawa (1998) says that the
results have been contradictory. It is possible that attrition is a case-by-case situation depending on a
number of variables (age, proficiency, literacy, the similarities between the L1 and L2, and whether
the L1 or the L2 is attriting). The threshold hypothesis states that there may be a level of proficiency
that once attained, enables the attriting language to remain stable.
The age effect L1 attrition
While attriters are reliably outperformed by native speakers on a range of tasks measuring overall proficiency, there is an astonishingly small range of variability and low incidence of non-targetlike use
in data even from speakers who claim not to have used their L1 for many decades (in some cases
upwards of 60 years, e.g. de Bot & Clyne 1994, Schmid 2002), provided they emigrated after puberty:
the most strongly attrited speakers still tend to compare favourably to very advanced L2 learners
(Schmid 2009). If, on the other hand, environmental exposure to the L1 ceases before puberty, the L1
system can deteriorate radically. There are few principled and systematic investigations of FLA
specifically investigating the impact of AoA. However, converging evidence suggests an age effect on
FLA which is much stronger and more clearly delineated than the effects which have been found in
SLA research. Two studies which consider pre- and postpuberty migrants (Ammerlaan 1996, AoA 0-29 yrs; Pelc 2001, AoA 8-32 yrs) find that AoA is one of the most important predictors of ultimate
proficiency, while a number of studies which investigate the impact of age among postpuberty
migrants fail to find any effect whatsoever (Köpke 1999, AoA 14-36 yrs; Schmid 2002, AoA 12-29 yrs;
Schmid 2007, AoA 17-51 yrs). A range of studies conducted by Montrul on Spanish heritage speakers
in the US as well as Spanish-English bilinguals with varying levels of AoA also suggests that the L1
system of early bilinguals may be similar to that of L2 speakers, while later learners pattern with
monolinguals in their L1 (e.g. Montrul 2008, 2009). These findings therefore indicate strongly that
early (pre-puberty) and late (post-puberty) exposure to an L2 environment have a different impact
on possible fossilization and/or deterioration of the linguistic system. A recent investigation,
focussing specifically on the age effect in L1 attrition, lends further substantiation to the assumption
of a qualitative change around puberty: Bylund (2009) investigates the L1 of 31 Spanish speakers
who emigrated to Sweden between the ages of 1 and 19 years and concludes that "there is a small
gradual decline in attrition susceptibility during the maturation period followed by a major decline at
its end (posited at around age 12)" (Bylund 2009:706). The strongest indication that an L1 can be
extremely vulnerable to attrition if exposure ceases before puberty, on the other hand, comes from a
study of Korean adoptees in France reported by Pallier (2007). This investigation could find no trace
of L1 knowledge in speakers (who had been between 3 and 10 years old when they were adopted by
French-speaking families) on a range of speech identification and recognition tasks, nor did an fMRI
study reveal any differences in brain activation when exposing these speakers to Korean as opposed
to unknown languages (Japanese or Polish). In all respects, the Korean adoptees presented in exactly
the same way as the French controls. All available evidence on the age effect for L1 attrition
therefore indicates that the development of susceptibility displays a curved, not a linear, function.
This suggests that in native language learning there is indeed a Critical Period effect, and that full
development of native language capacities necessitates exposure to L1 input for the entire duration
of this CP.
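The shape of this curved function can be caricatured numerically. The sketch below is purely illustrative, with invented numbers; it encodes only the qualitative pattern reported by Bylund (a small gradual decline during maturation, then a major decline posited at around age 12):

    # A purely illustrative caricature of the curved age effect in L1
    # attrition. All numbers are invented; only the shape is meaningful.
    def attrition_susceptibility(age_at_emigration: float) -> float:
        if age_at_emigration < 12:
            return 1.0 - 0.02 * age_at_emigration  # small gradual decline
        return 0.2                                  # major decline after ~12

    for age in (3, 8, 11, 15, 25):
        print(age, attrition_susceptibility(age))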
L2 attrition
In Hansen & Reetz-Kurashige (1999), Hansen cites her own research on L2-Hindi and Urdu attrition in
young children. As young pre-school children in India and Pakistan, the subjects of her study were
often judged to be native speakers of Hindi or Urdu; their mother was far less proficient. On return
visits to their home country, the United States, both children appeared to lose all their L2 while the
mother noticed no decline in her own L2 abilities. Twenty years later, those same young children as
adults comprehend not a word from recordings of their own animated conversations in Hindi-Urdu;
the mother still understands much of them.
Yamamoto (2001) found a link between age and bilinguality. In fact, a number of factors are at play
in bilingual families. In her study, bicultural families that maintained only one language, the minority
language, in the household, were able to raise bilingual, bicultural children without fail. Families that
adopted the one parent - one language policy were able to raise bilingual children at first but when
the children joined the dominant language school system, there was a 50% chance that children
would lose their minority language abilities. In families that had more than one child, the older child
was most likely to retain two languages, if it was at all possible. Younger siblings in families with
more than two other brothers and sisters had little chance of maintaining or ever becoming bilingual.
Attrition and frequency of use
One of the basic predictions of psycholinguistic research with respect to L1 attrition is that language
loss can be attributed to language disuse (e.g. Paradis, 2007; Köpke, 2007). According to this
prediction, attrition will be most radical among those individuals who rarely or never speak their L1
in daily life, while those speakers who use the L1 regularly, for example within their family or with
friends, will to some degree be protected against its deterioration. This assumption is based on the
simple fact that rehearsal of information can maintain accessibility. The amount of use which a
potential attriter makes of her L1 strikes most researchers intuitively as one of the most important
factors in determining the attritional process (e.g. Cook 2005; Paradis, 2007). Obler (1993) believes
that less-frequently used items are more difficult to retrieve. The speed of retrieving a correct form
or the actual production of an incorrect form is not indicative of loss but may be retrieval failure
instead. In other words, what appears to be lost is in fact difficult to retrieve.
There is, however, little direct evidence that the degree to which a language system will attrite depends on the extent to which the language is being used in everyday life. Two early studies
report that those subjects who used their L1 on an extremely infrequent basis showed more attrition
over time (de Bot, Gommans & Rossing 1991 and Köpke 1999). On the other hand, there is also some
evidence for a negative correlation, suggesting that the attriters who used their L1 on a daily basis
actually performed worse on some tasks (Jaspaert & Kroon 1989).
However, more recent, larger-scale studies of attrition which attempt to systematically elicit
information on language use in a large range of settings have failed to discover any strong links
between frequency of L1 use and degree of L1 attrition (Schmid 2007; Schmid & Dusseldorp, forthc.).
Motivation
Paradis (2007:128) predicts that "Motivation/affect may play an important role by influencing the
activation threshold. Thus attrition may be accelerated by a negative emotional attitude toward L1,
which will raise the L1 activation threshold. It may be retarded by a positive emotional attitude
toward L1, which will lower its activation threshold." However, the link between motivation and
attrition has been difficult to establish. Schmid & Dusseldorp (2010) fail to find any predictors for
individual performance among a large number of measures of motivation, identity, and
acculturation. Given Paradis' prediction, this finding is surprising, and may be linked to
methodological difficulties of measurement: by definition, studies of language attrition are
conducted a long time after migration has taken place (speakers investigated in such studies typically
have a period of residence in the L2 environment of several decades). The operative factor for the
degree of attrition, however, is probably the attitude towards both L1 and L2 at the beginning of this
period, when massive and intensive L2 learning is taking place, affecting the overall system of
multicompetence. However, attitudes are not stable and constant across a person's life, and what is
measured at the moment that the degree of individual attrition is assessed may bear very little
relation to the original feelings of the speaker. It is difficult to see how this methodological problem
can be overcome, as it is impossible for the attrition researcher to go back in time, and impractical to
measure attitudes at one point in time and attrition effects at a second point, decades later. The only study which has attempted to assess identity at the time of migration is Schmid (2002), who investigated attrition in its historical context. Her analysis of oral history interviews
with German Jews points strongly towards an important correlation of attitude and identity at the
moment of migration and eventual L1 attrition. The reasons for this pattern of L1 attrition probably
lie in a situation where the persecuted minority had the same L1 as the dominant majority, and the
L1 thus became associated with elements of identity of that dominant group. In such situations, a
symbolic link between the language and the persecuting regime can lead to a rejection of that
language. Discovering such a link and its impact on the language attrition process is extremely
complicated for most groups of migrants, since measurements are usually applied a long time after
migration took place. However, the findings presented here suggest that it is the attitude at the
moment of migration, not what is assessed several decades later, that impacts most strongly on the
attritional process.
Other studies on bilingualism and attrition
Gardner, Lalonde, & Moorcroft (1987) investigated the nature of L2-French skills attriting by L1-English grade 12 students during the summer vacation, and the role played by attitudes and
motivation in promoting language achievement and language maintenance. Students who finished the L2 class highly proficient were more likely to retain what they knew. Yet, interestingly, high achievers in the classroom situation were no more likely to make efforts to use the L2 outside the classroom unless they had positive attitudes and high levels of motivation. The authors write: "an underlying determinant of both acquisition and use is motivation" (p. 44).
In fact, the nature of language acquisition is still so complex, and so much is still unknown, that not all students will have the same experiences during the incubation period.
students will appear to attrite in some areas and others will appear to attrite in other areas. Some
students will appear to maintain the level that they had previously achieved. And still, other students
will appear to improve.
Murtagh (2003) investigated retention and attrition of L2-Irish in Ireland with second-level school students. At Time 1, she found that most participants were motivated instrumentally, yet the
immersion students were most likely to be motivated integratively and they had the most positive
attitudes towards learning Irish. Immersion school students were also more likely to have
opportunities to use Irish outside the classroom/school environment. Self-reports correlated with
ability. She concludes that the educational setting (immersion schools, for example) and the use of
the language outside the classroom were the best predictors for L2-Irish acquisition. Eighteen
months later, Murtagh finds that the majority of groups 1 and 2 believe their Irish ability has attrited,
the immersion group less so. The results from the tests, however, do not show any overall attrition: 'time' as a factor did not exert any overall significant change on the sample's proficiency in Irish (Murtagh, p. 159).
Fujita (2002), in a study evaluating attrition among bilingual Japanese children, says that a number of
factors are seen as necessary to maintain the two languages in the returnee child. Those factors
include: age on arrival in the L2 environment, length of residence in the L2 environment, and
proficiency levels of the L1. Furthermore, she found that L2 attrition was closely related to another
factor: the age of the child on returning to the L1 environment. Children returning around or before age 9 were more likely to attrite than those returning later. Upon returning from overseas, pressure from
society, their family, their peers and themselves force returnee children to switch channels back to
the L1 and they quickly make effort to attain the level of native-like L1 proficiency of their peers. At
the same time, lack of L2 support in the schools in particular and in society in general results in an
overall L2 loss.
Sum
The loss of a native language is often experienced as something profoundly moving, disturbing or
shocking, both by those who experience it and by those who witness it in others: "To lose your own language was like forgetting your mother, and as sad, in a way, because it is like losing part of one's soul" is how Alexander McCall Smith puts it (The Full Cupboard of Life, p. 163). This intuitive appeal
of the topic of language attrition can be seen as both a blessing and a curse. Researchers who
investigate level-ordered morphology or articulatory phonetics will probably rarely find that their
friends and family, or even colleagues who work in different areas, show great interest in or
enthusiasm for their work. Those of us who study language attrition find ourselves with a much more
receptive audience. On the other hand, we also find an audience with many preconceived ideas about attrition, a much rarer problem for the level-ordered morphologist or articulatory phonetician. Worse, we may find such notions in ourselves. Preconceived ideas are unscientific. They
are also often wrong. In order to achieve a better understanding of the process of language attrition,
scientific investigations first have to identify those areas of the L1 system which are most likely to be
affected by influence from the L2. Initially, in the absence of experimental data and evidence of L1
attrition itself, a profitable approach is to look to neighbouring areas of linguistic investigation, such
as language contact, creolisation, L2 acquisition, or aphasia. In the early years of L1 attrition
research, the 1980s, many researchers made valuable contributions of this nature, often augmented
by small-scale experiments and/or case studies (see Köpke & Schmid, 2004). A number of strong
predictions with intriguing theoretical implications were made during this period of L1 attrition
research (ibid.). There are a number of factors which will impact in different ways on the process of L1
attrition. Frequent use (interactive or receptive) of a particular language may help to maintain the
native language system intact, and so may a positive attitude towards the language or the speech
community. On the other hand, none of these factors may be enough in themselves, and not all
exposure to the language may be helpful. A small, loose-knit L1 social network may even have a
detrimental effect and accelerate language change. Most importantly, however, the opportunity to
use a language and the willingness to do so are factors which interact in complex ways to determine
the process of language attrition. As yet, our understanding of this interaction is quite limited.

Autonomy (Learner)
Learner Autonomy has been a buzzword in foreign language education over the past few decades, especially in relation to life-long learning skills. It has transformed old practices in the language classroom and has given rise to self-access language-learning centers around the world, such as the SALC at Kanda University of International Studies in Japan, the SAC at Hong Kong University of Science and Technology, and ELSAC at the University of Auckland. As a result of such practices,
language teaching is now increasingly seen in terms of language learning, and the learner has been placed at the centre of attention in language education.
The term "learner autonomy" was first coined in 1981 by Henri Holec, the "father" of learner
autonomy. Many definitions have since been given to the term, depending on the writer, the
context, and the level of debate educators have come to. It has been considered as a personal
human trait, as a political measure, or as an educational move. This is because autonomy is seen
either (or both) as a means or as an end in education.
Some of the most well known definitions in present literature are:
'Autonomy is the ability to take charge of one's own learning' (Henri Holec)
'Autonomy is essentially a matter of the learner's psychological relation to the process and content of
learning' (David Little)
'Autonomy is a situation in which the learner is totally responsible for all the decisions concerned
with his [or her] learning and the implementation of those decisions'. (Leslie Dickinson)
'Autonomy is a recognition of the rights of learners within educational systems'. (Phil Benson)
Taken from Gardner and Miller, Establishing Self-Access: From Theory to Practice, CUP (1999). See also Leni Dam, who has written a seminal work on autonomy (Dam, L. (1995) Autonomy from Theory to Classroom Practice. Dublin: Authentik).

One of the key aspects to consider in defining Learner Autonomy is whether we view it as a means to
an end (learning a foreign language) or as an end in itself (making people autonomous learners).
These two options do not exclude each other; both of them can be part of our views towards
language learning or learning in general.
Principles of learner autonomy could be:
Autonomy means moving the focus from teaching to learning.
Autonomy affords maximum possible influence to the learners.
Autonomy encourages and needs peer support and cooperation.
Autonomy means making use of self/peer assessment.
Autonomy requires and ensures 100% differentiation.
Autonomy can only be practised with student logbooks which are a documentation of learning
and a tool of reflection.
The role of the teacher in providing scaffolding and creating room for the development of autonomy is very demanding and very important.
Autonomy means empowering students. The classroom can be restrictive, but so are the rules of chess or tennis; the use of technology can take students beyond the strictures of the classroom, and students can bring the outside world into the classroom.
A New Direction in Educational Assessment
There have been numerous studies relating to the conative factors associated with autonomous learning, e.g. Reeve, Bolt, & Cai (1999) and Murdock, Anderman, & Hodge (2000). The salient characteristics associated with autonomous learning (resourcefulness,
initiative, and persistence) are crucial for high school-level students. Currently, the school structure
in place in the US is composed of a ladder system of advancement as directed solely by academic
achievement. As students proceed up the ladder, they are exposed to ever greater demands for learner autonomy. The required autonomy does not increase linearly across the 13 grades (K-12), but jumps sharply in the transition from middle (or junior high) school to high school. Studies suggest that students taught methods for autonomous
learning have a greater probability of succeeding in a high school setting. Further, students screened
for their level of autonomous learning perform better than those advanced simply on scholarly
achievement (Dillner, 2005).
An instrument for assessing learner autonomy may play a significant role in determining a student's readiness for high school. Such an instrument, appropriate for the adolescent learner, now exists; it is suitable for assessing readiness for greater learner autonomy, a quality that should be present in high school students.

Baby Talk / Child-Directed Speech


Baby talk, also referred to as caretaker speech, infant-directed talk (IDT) or child-directed speech
(CDS) and informally as "motherese", "parentese", or "mommy talk", is a nonstandard form of
speech used by adults in talking to toddlers and infants. It is usually delivered with a "cooing"
pattern of intonation different from that of normal adult speech: high in pitch, with many glissando
variations that are more pronounced than those of normal speech. Baby talk is also characterized by
the shortening and simplifying of words. Baby talk is also used by people when talking to their
pets, and between adults as a form of affection, intimacy, bullying or condescension.
Terminology
Baby talk is a long-established and universally understood traditional term.
Motherese and parentese are more precise terms than baby talk, and perhaps more amenable to
computer searches, but are not the terms of choice among child development professionals, because all caregivers, not only parents, use distinct speech patterns and vocabulary when talking to young children; the term motherese has also been criticized as gender stereotyping. Motherese can
also refer to English spoken in a higher, gentler manner, which is otherwise correct English, as
opposed to the non-standard, shortened word forms.
Child-directed speech or CDS is the term preferred by researchers, psychologists and child
development professionals.
Caregiver language is also sometimes used.
Baby talk is more effective than regular speech in getting an infant's attention. Studies have shown
that infants actually prefer to listen to this type of speech. Some researchers, including Rima Shore
(1997), believe that baby talk is an important part of the emotional bonding process.
Colwyn Trevarthen studied babies and their mothers, observing the communication and subtle movements between them. His work, along with that of other theorists, has links to music therapy.
Aid to cognitive development
Shore and other researchers believe that baby talk contributes to mental development, as it helps
teach the child the basic function and structure of language. Studies have found that responding to
an infant's babble with meaningless babble aids the infant's development; while the babble has no
logical meaning, the verbal interaction demonstrates to the child the bidirectional nature of speech,
and the importance of verbal feedback. Some experts advise that parents should not talk to infants
and young children solely in baby talk, but should integrate some normal adult speech as well. The
high-pitched sound of motherese gives it special acoustic qualities which may appeal to the infant
(Goodluck 1991). Motherese may aid a child in the acquisition and/or comprehension of language-particular rules which are otherwise unpredictable, when utilizing principles of universal grammar (Goodluck 1991). Some feel that parents should refer to the child and others by their names
only (no pronouns, e.g., he, I, or you), to avoid confusing infants who have yet to form an identity
independent from their parents.

Questions regarding universality


Some researchers have pointed out that baby talk is not universal among the world's cultures, and
argue that its role in "helping children learn grammar" has been overestimated. In some societies
(such as certain Samoan tribes; see first reference) adults do not speak to their children until the
children reach a certain age. In other societies, it is more common to speak to children as one would
to an adult, but with simplifications in grammar and vocabulary. In order to relate to the child during
baby talk, a parent may deliberately slur or fabricate some words, and may pepper the speech with
nonverbal utterances. A parent might refer only to objects and events in the immediate vicinity, and
will often repeat the child's utterances back to them. Since children employ a wide variety of
phonological and morphological simplifications (usually distance assimilation or reduplication) in
learning speech, such interaction results in the "classic" baby-words like na-na for grandmother or
din-din for dinner, where the child seizes on a stressed syllable of the input, and simply repeats it to
form a word.
In any case, the normal child will eventually acquire the local language without difficulty, regardless
of the degree of exposure to baby talk. However, the use of motherese could have an important role
in affecting the rate and quality of language acquisition.
Use with non-infants
The use of baby talk is not limited to interactions between adults and infants, as it may be used
among adults, or by adults to animals. In these instances, the outward style of the language may be
that of baby talk, but is not considered actual parentese, as it serves a different linguistic function
(see pragmatics).
Patronizing/derogatory baby talk
Baby talk may be used by one noninfant to another as a form of verbal abuse, in which the talk is
intended to infantilize the victim. This can occur during bullying, when the bully uses baby talk to
assert that the victim is weak, cowardly, overemotional, or otherwise submissive.
Flirtatious baby talk
Baby talk may be used as a form of flirtation between sex partners. In this instance, the baby talk
may be an expression of tender intimacy, and may form part of affectionate sexual roleplaying in
which one partner speaks and behaves childishly, while the other acts motherly or fatherly,
responding in parentese. One or both partners might perform the child role.
Baby talk with pets
Many people use falsetto, glissando and repetitive speech similar to baby talk when addressing their
pets. Such talk is not commonly used by professionals who train working animals, but is very
common among owners of companion pets. This style of speech is different from baby talk, despite
intonal similarities, especially if the speaker uses rapid rhythms and forced breathiness which may
mimic the animal's utterances. Pets often learn to respond well to the emotional states and specific
commands of their owners who use baby talk, especially if the owner's intonations are very distinct
from ambient noise. For example, a dog may recognize baby talk as his owner's invitation to play (as
is a dog's natural "play bow"); a cat may learn to come when addressed with the high-pitched
utterance, "Heeeeere kitty- kitty-kitty-kitty- kitty- kitty!"
Vocabulary
As noted above, baby talk often involves shortening and simplifying words, with the possible addition
of slurred words and nonverbal utterances, and can invoke a vocabulary of its own. Some utterances
are invented by parents within a particular family unit, or passed down from parent to parent over
generations, while others are quite widely known.
A fair number of baby talk and nursery words refer to bodily functions or private parts, partly
because the words are relatively easy to pronounce. Moreover, such words reduce adults' discomfort
with the subject matter, and make it possible for children to discuss such things without breaking
adult taboos.

Bloom's Taxonomy
Bloom's Taxonomy refers to a classification of the different objectives that educators set for
students (learning objectives). The taxonomy was first presented in 1956 through the publication
"The Taxonomy of Educational Objectives, The Classification of Educational Goals, Handbook I:
Cognitive Domain," by Benjamin Bloom (editor), M. D. Englehart, E. J. Furst, W. H. Hill, and David
Krathwohl. It is considered to be a foundational and essential element within the education
community as evidenced in the 1981 survey "Significant writings that have influenced the curriculum:
1906-1981", by H. G. Shane and the 1994 yearbook of the National Society for the Study of
Education. A great mythology has grown around the taxonomy, possibly due to many people learning
about the taxonomy through second-hand information. Bloom himself considered the Handbook,
"one of the most widely cited yet least read books in American education."
Key to understanding the taxonomy and its revisions, variations, and addenda over the years is an
understanding that the original Handbook was intended only to focus on one of the three domains
(as indicated in the domain specification in the title), but there was an expectation that additional material
would be generated for the other domains (as indicated in the numbering of the handbook in the
title). Bloom also considered the initial effort to be a starting point, as evidenced in a memorandum
from 1971 in which he said, "Ideally each major field should have its own taxonomy in its own
language - more detailed, closer to the special language and thinking of its experts, reflecting its own
appropriate sub-divisions and levels of education, with possible new categories, combinations of
categories and omitting categories as appropriate."
Bloom's Taxonomy divides educational objectives into three "domains": Affective, Psychomotor,
and Cognitive. Within the taxonomy, learning at the higher levels is dependent on having attained
prerequisite knowledge and skills at lower levels (Orlich et al., 2004). A goal of Bloom's Taxonomy
is to motivate educators to focus on all three domains, creating a more holistic form of education.
Skills in the affective domain describe the way people react emotionally and their ability to feel
another living thing's pain or joy. Affective objectives typically target the awareness and growth in
attitudes, emotion, and feelings.
There are five levels in the affective domain moving through the lowest order processes to the
highest:
Receiving
The lowest level; the student passively pays attention. Without this level no learning can occur.
Responding
The student actively participates in the learning process; he or she not only attends to a stimulus but
also reacts in some way.
Valuing
The student attaches a value to an object, phenomenon, or piece of information.
Organizing
The student can put together different values, information, and ideas and accommodate them within
his/her own schema; comparing, relating and elaborating on what has been learned.
Characterizing
The student holds a particular value or belief that now exerts influence on his/her behaviour so that
it becomes a characteristic.
Psychomotor


Skills in the psychomotor domain describe the ability to physically manipulate a tool or instrument
like a hand or a hammer. Psychomotor objectives usually focus on change and/or development in
behavior and/or skills.
Bloom and his colleagues never created subcategories for skills in the psychomotor domain, but since
then other educators have created their own psychomotor taxonomies.
Cognitive
Categories in the cognitive domain of Bloom's Taxonomy (Anderson & Krathwohl, 2001)
Skills in the cognitive domain revolve around knowledge, comprehension, and critical thinking of a
particular topic. Traditional education tends to emphasize the skills in this domain, particularly the
lower-order objectives.
There are six levels in the taxonomy, moving through the lowest order processes to the highest:
Knowledge
Exhibit memory of previously-learned materials by recalling facts, terms, basic concepts and answers
Knowledge of specifics - terminology, specific facts
Knowledge of ways and means of dealing with specifics - conventions, trends and sequences,
classifications and categories, criteria, methodology
Knowledge of the universals and abstractions in a field - principles and generalizations, theories and
structures
Questions like: What are the health benefits of eating apples?
Comprehension
Demonstrate understanding of facts and ideas by organizing, comparing, translating, interpreting,
giving descriptions, and stating main ideas
Translation
Interpretation
Extrapolation
Questions like: Compare the health benefits of eating apples vs. oranges.
Application
Using new knowledge. Solve problems in new situations by applying acquired knowledge, facts,
techniques and rules in a different way
Questions like: Which kinds of apples are best for baking a pie, and why?

Analysis
Examine and break information into parts by identifying motives or causes. Make inferences and find
evidence to support generalizations
Analysis of elements
Analysis of relationships
Analysis of organizational principles
Questions like: List four ways of serving foods made with apples and explain which ones have the
highest health benefits. Provide references to support your statements.

Synthesis
Compile information together in a different way by combining elements in a new pattern or
proposing alternative solutions
Production of a unique communication
Production of a plan, or proposed set of operations
Derivation of a set of abstract relations
Questions like: Convert an "unhealthy" recipe for apple pie to a "healthy" recipe by replacing your
choice of ingredients. Explain the health benefits of using the ingredients you chose vs. the original
ones.
Evaluation
Present and defend opinions by making judgments about information, validity of ideas or quality of
work based on a set of criteria
Judgments in terms of internal evidence
Judgments in terms of external criteria
Questions like: Do you feel that serving apple pie for an after school snack for children is healthy?
Why or why not?
Some critiques of Bloom's Taxonomy (cognitive domain) accept the existence of these six categories
but question the existence of a sequential, hierarchical link. Also, the revised edition of Bloom's
taxonomy has moved Synthesis to a higher order than Evaluation. Some consider the three lowest
levels as hierarchically ordered, but the three higher levels as parallel. Others say that it is
sometimes better to move to Application before introducing concepts. This thinking would
seem to relate to the method of problem-based learning.
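
Since the cognitive domain is presented as an ordered hierarchy with prerequisites, the idea can be
illustrated with a short sketch. The Python fragment below is only a toy: the level names come from
the Handbook, while the enum and the prerequisites function are inventions for this example, and they
encode precisely the strict ordering that the critics above call into question.

from enum import IntEnum

class CognitiveLevel(IntEnum):
    # The six levels of the original cognitive domain, lowest to highest.
    KNOWLEDGE = 1
    COMPREHENSION = 2
    APPLICATION = 3
    ANALYSIS = 4
    SYNTHESIS = 5
    EVALUATION = 6

def prerequisites(level):
    # On the strict reading, an objective at a given level presupposes
    # mastery of every level below it.
    return [l for l in CognitiveLevel if l < level]

print(prerequisites(CognitiveLevel.ANALYSIS))
# prints the list Knowledge, Comprehension, Application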

Bottom-Up Approach
There comes a point at which understanding and studying language from a top-down perspective is
just about impossible without a good understanding of the details. In this way, a bottom-up
approach to foreign language study is necessary. This is not easy, however. In some instances, it
requires looking closely at grammatical forms that are used less frequently and that might be more
complicated. It also requires learning and teaching vocabulary with a lexical approach. A bottom-up
approach to ESL/EFL learning is further complicated by reduced forms, or reductions, which occur in
the everyday speech of people whose first language is English. Students might feel that English is
quite mysterious and complex. In my view, a bottom-up approach is not used often enough in the
ESL/EFL classroom. The opportunity for bottom-up learning and teaching does not always present
itself in all its forms; one has to create the conditions for it. Doing so
will begin to assist students in unlocking the mystery. The ESL/EFL classroom can, at times, shelter
students from the reality of how English is used. Guidance from an ESL/EFL teacher with an objective
viewpoint and whose first language is English is, therefore, indispensable to the ESL/EFL learner.
Taking a bottom-up approach to ESL/EFL learning and teaching can help unravel some of the
complexities involved in becoming a proficient speaker and listener of English as a second or foreign
language. In order to go beyond the high-intermediate level in ESL/EFL studies, a bottom-up
approach to learning and teaching is absolutely necessary. Still, one should take note that students at
lower levels can also become frustrated by not understanding everything in a given text. Ultimately,
it's the teacher's responsibility to show ESL/EFL students how they can take a bottom-up approach to
their independent study of English. Of course, students have to do their part as well. The student
who wants to progress will take on this responsibility. Part of teaching a language is showing how to
learn a language; part of learning a language is understanding that a great deal of learning occurs
outside of the classroom by means of independent study, speaking practice, and exposure to the
language.
One has to know where to begin and what to learn next. At a certain point, the secret of what to
learn next lies in a bottom-up approach to language study. There is a lot of focus on grammar, and
that's necessary. Grammatical accuracy is important to the student who aspires to reach a high level
of proficiency. However, language is complex, and grammar alone will not allow one to rise to the
true level of understanding that one might desire. ESL/EFL grammar books present forms separately
and apart from one another. While this form of learning and teaching is practical and necessary, it is
in opposition to how language is really used and understood. Texts which show grammar in context
must be brought into the ESL/EFL classroom at all levels. At the beginner level, this is easier said than
done. At more advanced levels, formal language cannot be fully processed and understood without
showing it side by side with informal language. The distinction between formal language and
informal language is not always clear; the lines are sometimes blurred. It is not always easy to say
what formal language is and what informal language is. Authentic texts are necessary in order to
understand this.
While it may challenge and frustrate some learners, the introduction of texts which are more
complicated is necessary if one is to bring bottom-up learning and teaching to the ESL/EFL
classroom. One could say that more complicated texts are, in fact, authentic texts. Such a text might be
impossible to understand using a top-down approach alone. The proficient speaker of English, or
speaker whose first language is English, more often employs a top-down approach to listening and
understanding in real-life circumstances, whereas the less proficient ESL/EFL learner might become
easily frustrated at what seems to be his or her lack of ability to do the same. The teacher should
then ask: what impedes understanding? Certainly, one's level of expression is limited by one's ability
to understand. However, a higher level of understanding does not guarantee a higher level of
expression. Automatic processing is necessary to achieve a higher level of expression. Automatic
processing becomes more accessible to one who aspires to reach a high level of proficiency as one
gets a handle on more of the details and intricacies of English. It is difficult to progress if one is
continually burdened by having to analyze the language in one's environment in order to understand
it and respond to it. Language can be complex and full of subtleties. Bottom-up is the way forward if
one is to reach a high level of proficiency as a speaker, listener, writer, and reader of English. The
teacher should not only be the students' guide to learning and understanding, but should also show
students how they can be their own guides to learning and understanding.
Students: Do you require a high level of proficiency? If so, do you know where to begin?
Teachers: Do you understand what your students require?


Clinical Supervision
What is Clinical Supervision?
In its simplest form, clinical supervision includes:
a) a conference with the student teacher to preview objectives and the lesson plan,
b) direct observation of the lesson, and
c) a follow-up conference with the student, with feedback on strengths and areas for improvement.
What is the theory behind clinical supervision?
A successful student teaching experience is the keystone of pre-service teacher preparation. Clinical
supervision is a means of ensuring that student teaching is carried out most effectively through
systematic planning, observation, and feedback. The clinical supervision model is designed to help
teachers grow. It systematically builds on strengths while eliminating counterproductive approaches.
It recognizes that each student teacher is different. No candidate will be a carbon copy of "the ideal
teacher" or teach every lesson exactly according to a given model. Clinical supervision develops the
student teacher's ability to reflect on experience and apply principles and concepts to self-improvement efforts.
Three actors must play their roles well for clinical supervision to succeed. The student teacher must
plan lessons early enough that the supervisor and master teacher can review them before class. The
master teacher and supervisor must find compatible strategies in supporting the student teacher.
When clinical supervision is a cooperative endeavor, the results can be rewarding to all.
What is the process of clinical supervision?
Clinical supervision is a continuous series of cycles in which the supervisor assists the student teacher
in developing ever more successful instructional strategies (not necessarily the mentor's pet
methods). The approach was first published by Goldhammer (1969) and Cogan (1973) but effective
supervisors had been using similar methods for some time. In Clinical Supervision: A State of the Art
Review (ASCD, 1980), Cheryl Sullivan describes clinical supervision as an eight-phase cycle of
instructional improvement:
1) Supervisor establishes the clinical relationship with the teacher by explaining the purpose and
sequence of clinical supervision. (No secrets; this is not "snoopervision.")
2) Planning of the lesson(s), either independently by the student teacher or jointly.
3) Discussion/evaluation of the lesson plan.
4) Observation of the lesson, recording of appropriate data.
5) Teacher and supervisor analyze the teaching/learning process, especially "critical incidents and
pattern analysis." Questions are preferable to lectures: "Why do you think the students started to
talk when you . . . ?" (Step 5 should follow the observation as soon as possible so that both
participants have a clear recollection of what happened.)
6) Teacher makes decisions about his/her behavior and the students' behaviors and learning.
7) Supervisor and teacher decide on changes sought in the teacher's behavior, then create a plan for
implementing the changes.
8) Arrangements for the next pre-observation conference.
Obviously these phases may be modified, but the significant elements of each should be addressed. The
process is formative assessment, not summative evaluation. Subsequent evaluation may be based on
progress toward the goals set in the post-observation conference.
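
Because the model is explicitly cyclical, the eight phases can be pictured as a repeating loop. The
Python sketch below is only a schematic of the cycle just described; the choice to re-enter later
cycles at the planning phase rather than at phase 1 is an assumption made for the illustration, since
the relationship-building of phase 1 normally need not be repeated.

PHASES = [
    "1. Establish the clinical relationship",
    "2. Plan the lesson(s)",
    "3. Discuss/evaluate the lesson plan",
    "4. Observe the lesson, record data",
    "5. Analyze the teaching/learning process",
    "6. Teacher decides about behaviors and learning",
    "7. Decide on changes and plan their implementation",
    "8. Arrange the next pre-observation conference",
]

def supervision_cycle(n_cycles):
    # Phase 1 happens once; each later cycle re-enters at planning,
    # an assumption made only for this illustration.
    for cycle in range(1, n_cycles + 1):
        start = 0 if cycle == 1 else 1
        for phase in PHASES[start:]:
            print(f"Cycle {cycle}: {phase}")

supervision_cycle(2)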
What do supervisors look for in the classroom?
In the pre-observation conference, the objectives of the lesson will be established. During
observation, the supervisor will record performance and pupil response in relation to the objectives.
The supervisor will also note critical incidents that affect teaching effectiveness. In the pre-observation conference, the student teacher may ask the supervisor to watch for particular things.
The supervisor may want to delay discussing all the negative aspects of the student teacher's
performance in the first session. A person can handle only so much criticism at one time. This is
especially true when using videotaped reviews, where the student teacher may see many things that
are wrong. The role of the supervisor is to support the teacher and point out what went well rather
than dwelling too long on the errors. As student teaching progresses, and the major problems have
been addressed, there will be time to introduce secondary considerations.
There are many methods for recording observations. Ned Flanders' interaction analysis methodology
is appropriate. One useful book is Keith Acheson and Meredith Gall's Techniques in the Clinical
Supervision of Teachers (Longman, 1987).
Which teaching models are appropriate?
When clinical supervision was first introduced, participants said, "Fine. Now what should I look for as
the critical incidents of teaching?" Since direct instruction is the most common teaching strategy, and
since Madeline Hunter's seven-step model is an effective direct-instruction model, the Hunter model
was provided. However, some supervisors tried to make it fit where it wasn't appropriate, resulting
in an undeserved negative reputation for clinical supervision. The Association of California School
Administrators (ACSA), recognizing this problem, published A Practical Guide for Instructional
Supervision: A Tool for Administrators and Supervisors. This excellent handbook contains ten models
describing a range of the more useful teaching strategies. Bruce Joyce and Marsha Weil's Models of
Teaching (Prentice-Hall, 1987) describes more than eighty strategies.
More than 130 lesson plan models have been identified, each appropriate to certain teaching
objectives. The California Department of Education identifies these models within four families (same
as Joyce and Weil's, with different titles):
Behavioral: Transmits the culture by teaching skills and knowledge. Strategies: direct instruction,
written language.
Social Interaction: Teaches social skills and communication. Strategies: cooperative learning, group
discussion, total physical response.
Generative: Develops internal resources to see things in new ways. Strategies: brainstorming,
synectics.
Cognitive: Improves logical thinking processes, develops thoughtful citizens through critical thinking.
Strategies: concept attainment, inquiry, math problem solving.
The supervisor should determine whether the model chosen is appropriate for the student teacher's
objectives.

Cognitive Linguistics
Cognitive Linguistics grew out of the work of a number of researchers active in the 1970s who were
interested in the relation of language and mind, and who did not follow the prevailing tendency to
explain linguistic patterns by means of appeals to structural properties internal to and specific to
language. Rather than attempting to segregate syntax from the rest of language in a 'syntactic
component' governed by a set of principles and elements specific to that component, the line of
research followed instead was to examine the relation of language structure to things outside
language: cognitive principles and mechanisms not specific to language, including principles of
human categorization; pragmatic and interactional principles; and functional principles in general,
such as iconicity and economy.
The most influential linguists working along these lines and focusing centrally on cognitive principles
and organization were Wallace Chafe, Charles Fillmore, George Lakoff, Ronald Langacker, and
Leonard Talmy. Each of these linguists began developing their own approach to language description
and linguistic theory, centered on a particular set of phenomena and concerns. One of the important
assumptions shared by all of these scholars is that meaning is so central to language that it must be
a primary focus of study. Linguistic structures serve the function of expressing meanings and hence
the mappings between meaning and form are a prime subject of linguistic analysis. Linguistic forms,
in this view, are closely linked to the semantic structures they are designed to express. Semantic
structures of all meaningful linguistic units can and should be investigated.
These views were in direct opposition to the ideas developing at the time within Chomskyan
linguistics, in which meaning was 'interpretive' and peripheral to the study of language. The central
object of interest in language was syntax. The structures of language were in this view not driven by
meaning, but instead were governed by principles essentially independent of meaning. Thus, the
semantics associated with morphosyntactic structures did not require investigation; the focus was on
language-internal structural principles as explanatory constructs.
Functional linguistics also began to develop as a field in the 1970s, in the work of linguists such as
Joan Bybee, Bernard Comrie, John Haiman, Paul Hopper, Sandra Thompson, and Tom Givon. The
principal focus of functional linguistics is on explanatory principles that derive from language as a
communicative system, whether or not these directly relate to the structure of the mind. Functional
linguistics developed into discourse-functional linguistics and functional-typological linguistics, with
slightly different foci, but broadly similar in aims to Cognitive Linguistics. At the same time, a
historical linguistics along functional principles emerged, leading to work on principles of
grammaticalization (grammaticization) by researchers such as Elizabeth Traugott and Bernd Heine.
All of these theoretical currents hold that language is best studied and described with reference to its
cognitive, experiential, and social contexts, which go far beyond the linguistic system proper.
Other linguists developing their own frameworks for linguistic description in a cognitive direction in
the 1970s were Sydney Lamb (Stratificational Linguistics, later Neurocognitive Linguistics) and Dick
Hudson (Word Grammar).
Much work in child language acquisition in the 1970s was influenced by Piaget and by the cognitive
revolution in Psychology, so that the field of language acquisition had a strong functional/cognitive
strand through this period that persists to the present. Work by Dan Slobin, Eve Clark, Elizabeth
Bates and Melissa Bowerman laid the groundwork for present day cognitivist work.
Also during the 1970s, Chomsky made the strong claim of innateness of the linguistic capacity leading
to a great debate in the field of acquisition that still reverberates today. His idea of acquisition as a
'logical problem' rather than an empirical problem, and his view of it as a matter of minor parameter-setting operations on an innate set of rules, were rejected by functionally and cognitively oriented
researchers and in general by those studying acquisition empirically, who saw the problem as one of
learning, not fundamentally different from other kinds of learning.
By the late 1980s, the kinds of linguistic theory development being done in particular by Fillmore,
Lakoff, Langacker, and Talmy, although appearing radically different in the descriptive mechanisms
proposed, could be seen to be related in fundamental ways. Fillmore's ideas had developed into
Frame Semantics and, in collaboration with others, Construction Grammar (Fillmore et al. 1988).
Lakoff was well-known for his work on metaphor and metonymy (Lakoff 1981 and Lakoff 1987).
Langacker's ideas had evolved into an explicit theory known first as Space Grammar and then
Cognitive Grammar (Langacker 1988). Talmy had published a number of increasingly influential
papers on linguistic imaging systems (Talmy 1985a,b and 1988).
Page | 37

Also by this time, Gilles Fauconnier had developed a theory of Mental Spaces, influenced by the
views of Oswald Ducrot. This theory was later developed in collaboration with Mark Turner into a
theory of Conceptual Blending, which meshes in interesting ways with both Langacker's Cognitive
Grammar and Lakoff's theory of Metaphor.
The 1980s also saw the development of connectionist models of language processing, such as those
developed by Jeff Elman and Brian MacWhinney, in which the focus was on modeling learning,
specifically language acquisition, using connectionist networks. This work tied naturally in to the
acquisition problem, and with the research program of Elizabeth Bates who had demonstrated the
learned nature of children's linguistic knowledge, and its grounding in cognitive and social
development. Gradually, a coherent conceptual framework emerged which exposed the flaws of
linguistic nativism and placed experiential learning at the center in the understanding of how
children acquire language. This conception was the foundation for the research program of Michael
Tomasello, who in the 1990s began to take the lead in the study of acquisition in its social, cognitive,
and cultural contexts.
Through the 1980s the work of Lakoff and Langacker, in particular, began to gain adherents. During
this decade researchers in Poland, Belgium, Germany, and Japan began to explore linguistic problems
from a cognitive standpoint, with explicit reference to the work of Lakoff and Langacker. 1987 saw
the publication of Lakoff's influential book Women, Fire and Dangerous Things, and, at almost the
same time, Langacker's 1987 Foundations of Cognitive Grammar Vol. 1, which had been circulating
chapter by chapter since 1984.
The next publication milestone was the collection Topics in Cognitive Linguistics, ed. by Brygida
Rudzka-Ostyn, published by Mouton in 1988. This substantial volume contains a number of seminal
papers by Langacker, Talmy, and others, which made it widely influential, an influence that
continues to this day.
In 1989, the first conference on Cognitive Linguistics was organized in Duisburg, Germany, by Rene
Dirven. At that conference, it was decided to found a new organization, the International Cognitive
Linguistic Association, which would hold biennial conferences to bring together researchers working
in cognitive linguistics. The Duisburg conference was retroactively declared the first International
Cognitive Linguistics Conference (see ICLA Organization History).
The journal Cognitive Linguistics was also conceived in the mid-1980s, and its first issue appeared in
1990 under the imprint of Mouton de Gruyter, with Dirk Geeraerts as editor.
At the Duisburg conference, Rene Dirven proposed a new book series, Cognitive Linguistics Research,
as another publication venue for the developing field. The first CLR volume, a collection of articles by
Ronald Langacker, brought together under the title Concept, Image and Symbol, came out in 1990.
The following year, Volume 2 of Langacker's Foundations of Cognitive Grammar appeared.
During the 1990s Cognitive Linguistics became widely recognized as an important field of
specialization within Linguistics, spawning numerous conferences in addition to the biennial ICLC
meetings. The work of Lakoff, Langacker, and Talmy formed the leading strands of the theory, but
connections with related theories such as Construction Grammar were made by many working
cognitive linguists, who tended to adopt representational eclecticism while maintaining basic tenets
of cognitivism. Korea, Hungary, Thailand, Croatia, and other countries began to host cognitive
linguistic research and activities. The breadth of research could be seen in the journal Cognitive
Linguistics which had become the official journal of the ICLA. Arie Verhagen took over as editor,
leading the journal into its second phase.
By the mid-1990s, Cognitive Linguistics as a field was characterized by a defining set of intellectual
pursuits practiced by its adherents, summarized in the Handbook of Pragmatics under the entry for
Cognitive Linguistics (Geeraerts 1995: 111-112):


Because cognitive linguistics sees language as embedded in the overall cognitive capacities of man,
topics of special interest for cognitive linguistics include: the structural characteristics of natural
language categorization (such as prototypicality, systematic polysemy, cognitive models, mental
imagery and metaphor); the functional principles of linguistic organization (such as iconicity and
naturalness); the conceptual interface between syntax and semantics (as explored by cognitive
grammar and construction grammar); the experiential and pragmatic background of language-in-use;
and the relationship between language and thought, including questions about relativism and
conceptual universals.
In this summary, the strong connections between Cognitive Linguistics and the research areas of
functional linguistics, linguistic description, psycholinguistics, pragmatics, and discourse studies can
be seen.
For many cognitive linguists, the main interest in CL lies in its provision of a better-grounded
approach to and set of theoretical assumptions for syntactic and semantic theory than generative
linguistics provides. For others, however, an important appeal is the opportunity to link the study of
language and the mind to the study of the brain.
In the 2000s regional Cognitive Linguistics Associations, affiliated to ICLA, began to emerge. Spain,
Finland, Poland, Russia and Germany became the sites of new CLAs; a Slavic CLA was based in North
America. Currently new Associations are being formed, serving France, Japan, Belgium, the United
Kingdom, and North America. A review journal, the Annual Review of Cognitive Linguistics, began its
run. Adele Goldberg took over as editor of Cognitive Linguistics and continued to increase the
journal's reputation and prominence in Linguistics.
Cognitive linguistics conferences continue to be organized in many countries, to the extent that it is
difficult to keep track of them all. The ICLC was held for the first time in Asia, specifically in Seoul,
Korea, in July 2005. Asia now has a very significant membership base. The ICLA hopes that in the
not too distant future another Asian country, such as China, will host the ICLC.
The ICLA continues to foster the development of Cognitive Linguistics as a worldwide discipline, and
to enhance its connection with its natural neighbor disciplines of Psychology, Anthropology,
Sociology, and of course Cognitive Science.

Cognitive Linguistics II
In linguistics and cognitive science, cognitive linguistics (CL) refers to the school of linguistics that
understands language creation, learning, and usage as best explained by reference to human
cognition in general. It is characterized by adherence to three central positions. First, it denies that
there is an autonomous linguistic faculty in the mind; second, it understands grammar in terms of
conceptualization; and third, it claims that knowledge of language arises out of language use.
Cognitive linguists deny that the mind has any module for language acquisition that is unique and
autonomous. This stands in contrast to the work done in the field of generative grammar. Although
cognitive linguists do not necessarily deny that part of the human linguistic ability is innate, they
deny that it is separate from the rest of cognition. Thus, they argue that knowledge of linguistic
phenomena (i.e., phonemes, morphemes, and syntax) is essentially conceptual in nature.
Moreover, they argue that the storage and retrieval of linguistic data is not significantly different
from the storage and retrieval of other knowledge, and that the use of language in understanding
employs cognitive abilities similar to those used in other non-linguistic tasks.
Departing from the tradition of truth-conditional semantics, cognitive linguists view meaning in
terms of conceptualization. Instead of viewing meaning in terms of models of the world, they view it
in terms of mental spaces.


Finally, cognitive linguistics argues that language is both embodied and situated in a specific
environment. This can be considered a moderate offshoot of the Sapir-Whorf hypothesis, in that
language and cognition mutually influence one another, and are both embedded in the experiences
and environments of their users.
Areas of study
Cognitive linguistics is divided into three main areas of study:
Cognitive semantics, dealing mainly with lexical semantics
Cognitive approaches to grammar, dealing mainly with syntax, morphology and other traditionally
more grammar-oriented areas.
Cognitive phonology.
Aspects of cognition that are of interest to cognitive linguists include:
Construction grammar and cognitive grammar.
Conceptual metaphor and conceptual blending.
Image schemas and force dynamics.
Conceptual organization: Categorization, Metonymy, Frame semantics, and Iconicity.
Construal and Subjectivity.
Gesture and sign language.
Linguistic relativity.
Cognitive neuroscience.
Related work that interfaces with many of the above themes:
Computational models of metaphor and language acquisition.
Psycholinguistics research.
Conceptual semantics, pursued by generative linguist Ray Jackendoff, is related because of its active
psychological realism and its incorporation of prototype structure and images.
Cognitive linguistics, more than generative linguistics, seeks to mesh together these findings into a
coherent whole. A further complication arises because the terminology of cognitive linguistics is not
entirely stable, both because it is a relatively new field and because it interfaces with a number of
other disciplines.

Insights and developments from cognitive linguistics are becoming accepted ways of analysing
literary texts, too. Cognitive Poetics, as it has become known, has become an important part of
modern stylistics. The best summary of the discipline as it currently stands is Peter Stockwell's
Cognitive Poetics.

Communication Strategy
A way used to express a meaning in a second or foreign language, by a learner who has a limited
command of the language. In trying to communicate, a learner may have to make up for a lack of
knowledge of grammar or vocabulary. For example, the learner may not be able to say "It's against the
law to park here" and so may say "This place, cannot park." For handkerchief a learner could say "a
cloth for my nose", and for apartment complex the learner could say "building". The use of paraphrase
and other communication strategies (e.g. gesture and mime) characterizes the interlanguage of some
language learners.

Communicative Competence
It is the knowledge not only of whether something is formally possible in a language, but also of
whether it is feasible, appropriate, or done in a particular SPEECH COMMUNITY. Communicative
competence includes:
a. grammatical competence (also formal competence), that is, knowledge of the grammar,
vocabulary, phonology, and semantics of a language
b. sociolinguistic competence (also sociocultural competence), that is, knowledge of the relationship
between language and its nonlinguistic context, knowing how to use and respond appropriately to
different types of speech acts, such as requests, apologies, thanks, and invitations, knowing which
address forms should be used with different persons one speaks to and in different situations, and
so forth
c. discourse competence (sometimes considered part of sociolinguistic competence), that is, knowing
how to begin and end conversations
d. strategic competence, that is, knowledge of communication strategies that can compensate for
weaknesses in other areas.

Competence / Performance
Competence is the implicit system of rules that constitutes a person's knowledge of a language. This
includes a person's ability to create and understand sentences, including sentences they have never
heard before, knowledge of what are and what are not sentences of a particular language, and the
ability to recognize ambiguous and deviant sentences. For example, a speaker of English would
recognize "I want to go home" as an English sentence but would not accept a sentence such as "I want
going home" even though all the words in it are English words. Competence often refers to an ideal
speaker/hearer, that is an idealized but not a real person who would have a complete knowledge of
the whole language. A distinction is made between competence and PERFORMANCE, which is the
actual use of the language by individuals in speech and writing.
Performance is a person's actual use of language. A difference is made between a person's knowledge
of a language (COMPETENCE) and how a person uses this knowledge in producing and understanding
sentences (PERFORMANCE). For example, people may have the competence to produce an infinitely long
sentence but when they actually attempt to use this knowledge (to perform) there are many
reasons why they restrict the number of adjectives, adverbs, and clauses in any one sentence. They
may run out of breath, or their listeners may get bored or forget what has been said if the sentence is
too long. In second and foreign language learning, a learner's performance in a language is often
taken as an indirect indication of his or her competence, although other indexes such as
GRAMMATICALITY judgements are sometimes considered a more direct measure of competence.
There is also a somewhat different way of using the term performance. In using language, people
often make errors (see SPEECH ERRORS). These may be due to performance factors such as fatigue,
lack of attention, excitement, nervousness. Their actual use of language on a particular occasion may
not reflect their competence. The errors they make are described as examples of performance.

Comprehensible input (Krashen)


According to the comprehensible input hypothesis (originally called the input hypothesis), we acquire
language only when we receive comprehensible input (CI). This hypothesis, developed by Stephen
Krashen, is one of the most prominent modern theories in the fields of first language acquisition and
second language acquisition (SLA).
If (i) represents previously acquired linguistic competence and extra-linguistic knowledge, the
hypothesis claims that we move from i to i+1 by understanding input that contains i+1. Extra-linguistic knowledge includes our knowledge of the world and of the situation, that is, the context.
The +1 represents new knowledge or language structures that we should be ready to acquire.
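
Krashen's notation invites a small worked illustration. The Python sketch below is not from Krashen;
it simply assumes, for concreteness, a strictly ordered sequence of structures (the natural-order
restatement discussed next), with the structure list and the competence counter invented for the
example.

# Toy model of i and i+1, assuming structures are acquired in a fixed order.
structures = ["structure 1", "structure 2", "structure 3", "structure 4"]

def next_acquirable(i):
    # With competence through structure i, the next acquirable item is i+1.
    return structures[i]            # 0-based list, so index i holds item i+1

def usable_for_acquisition(input_level, i):
    # Input serves acquisition when it contains i+1: one step beyond current
    # competence, yet still understandable with the help of context.
    return input_level == i + 1

i = 2                                   # structures 1 and 2 already acquired
print(next_acquirable(i))               # structure 3
print(usable_for_acquisition(3, i))     # True: the input contains i+1
print(usable_for_acquisition(5, i))     # False: too far beyond competence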
The comprehensible input hypothesis can be restated in terms of the natural order hypothesis. For
example, if we acquire the rules of language in a linear order (1, 2, 3...), then i represents the last rule
or language form learned, and i+1 is the next structure that should be learned. It must be stressed
however, that not just any input is sufficient: the input received must be comprehensible. According
to Krashen, there are three corollaries to his theory.
Corollaries of the input/comprehension hypothesis

1. Talking (output) is not practicing
Krashen stresses yet again that speaking in the target language does not result in language
acquisition. Although speaking can indirectly assist in language acquisition, the ability to speak is not
the cause of language learning or acquisition. Instead, comprehensible output is the result of
language acquisition.
2. When enough comprehensible input is provided, i+1 is present
That is to say, that if language models and teachers provide enough comprehensible input, then
the structures that acquirers are ready to learn will be present in that input. According to Krashen,
this is a better method of developing grammatical accuracy than direct grammar teaching.
3. The teaching order is not based on the natural order
Instead, students will acquire the language in a natural order by receiving comprehensible
input.
Applications in second language teaching
Beginning level
Class time is filled with comprehensible oral input
Teachers must modify their speech so that it is comprehensible
Demands for speaking (output) are low; students are not forced to speak until ready
Grammar instruction is only included for students high school age and older
Intermediate level
Sheltered subject-matter teaching that uses modified academic texts to provide comprehensible
input.
Sheltered subject matter teaching is not for beginners or native speakers of the target language.
In sheltered instruction classes, the focus is on the meaning, and not the form.

Connectionism


A theory in COGNITIVE SCIENCE that assumes that the individual components of human cognition are
highly interactive and that knowledge of events, concepts and language is represented diffusely in
the cognitive system. The theory has been applied to models of speech processing, lexical
organization, and first and second language learning. Connectionism provides mathematical models
and computer simulations that try to capture both the essence of INFORMATION PROCESSING and
thought processes. The basic assumptions of the theory are:
1 Information processing takes place through the interactions of a large number of simple units,
organized into networks and operating in parallel.
2 Learning takes place through the strengthening and weakening of the interconnections in a
particular network in response to examples encountered in the INPUT.
3 The result of learning is often a network of simple units that acts as though it knows abstract
rules, although the rules themselves exist only in the form of association strengths distributed across
the entire network.
Connectionism is sometimes referred to as parallel distributed processing (PDP) or neural networks.
There are slight differences among these terms, and over time connectionism has come to be viewed
as the most general term.
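
The flavor of these assumptions can be conveyed with a minimal sketch. The Python fragment below
is not any published model; the training pairs and the learning rate are invented for the example. It
trains a single unit with the classic delta rule, so that "learning" is nothing but the strengthening
and weakening of two connection weights in response to the input.

# Minimal connectionist sketch: one unit adjusts its connection strengths
# in response to examples encountered in the input.
examples = [([1.0, 0.0], 1.0),   # input pattern -> desired output
            ([0.0, 1.0], 0.0),
            ([1.0, 1.0], 1.0)]

weights = [0.0, 0.0]             # connection strengths, initially neutral
rate = 0.1                       # learning rate (chosen arbitrarily)

for _ in range(50):              # repeated exposure to the input
    for pattern, target in examples:
        output = sum(w * x for w, x in zip(weights, pattern))
        error = target - output
        # Delta rule: strengthen or weaken each connection to reduce error.
        weights = [w + rate * error * x for w, x in zip(weights, pattern)]

print(weights)   # the network's only "knowledge": association strengths

After training, the unit behaves as though it knows the rule "respond to the first feature", yet no
such rule is stored anywhere; only the two association strengths are.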

Constructivism
Constructivism is a social and educational philosophy based on the beliefs that:
1 knowledge is actively constructed by learners and not passively received;
2 cognition is an adaptive process that organizes the learner's experiential world;
3 all knowledge is socially constructed.
Constructivists believe that there are no enduring, context-free truths, that researcher BIAS cannot
be eliminated, that multiple, socially constructed realities can only be studied holistically rather than
in pieces, and that the possibility of generalizing from one research site to another is limited.
Learning is seen as involving reorganization and reconstruction and it is through these processes that
people internalize knowledge and perceive the world. In language teaching, constructivism has led to
a focus on learning strategies, learner beliefs, teacher thinking and other aspects of learning which
stress the individual and personal contributions of learners to learning. A constructivist view of
teaching involves teachers in making their own sense of their classrooms and taking on the role of a
reflective practitioner.

Contrastive Analysis
CA is the comparison of the linguistic systems of two languages, for example the sound system or the
grammatical system. Contrastive analysis was developed and practiced in the 1950s and 1960s, as an
application of STRUCTURAL LINGUISTICS to language teaching, and is based on the following
assumptions:
a. the main difficulties in learning a new language are caused by interference from the first language;
b. these difficulties can be predicted by contrastive analysis;
c. teaching materials can make use of contrastive analysis to reduce the effects of interference.
Contrastive analysis was more successful in PHONOLOGY than in other areas of language, and
declined in the 1970s as interference was replaced by other explanations of learning difficulties. In
recent years contrastive analysis has been applied to other areas of language, for example the
discourse systems.
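
Assumption (b) can be pictured with a toy comparison of two sound systems. In the Python sketch
below, both consonant inventories are deliberately tiny and incomplete, invented only for the
example; a real contrastive analysis compares entire phonological and grammatical systems rather
than bare set membership.

# Toy contrastive analysis: two (deliberately incomplete) consonant inventories.
english = {"p", "b", "t", "d", "th", "v", "w"}
german = {"p", "b", "t", "d", "f", "v", "ts"}

# Target-language sounds missing from the learner's first language are the
# points where interference-driven difficulty is predicted.
difficulties_for_german_learner = english - german
print(sorted(difficulties_for_german_learner))   # ['th', 'w']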

Conversational Maxim
Conversational Maxim is an unwritten rule about conversation which people know and which
influences the form of conversational exchanges. For example in the following exchange:

A: Let's go to the movies.
B: I have an examination in the morning.

B's reply might appear not to be connected to A's remark. However, since A has made an invitation
and since a reply to an invitation is usually either an acceptance or a refusal, B's reply is here
understood as an excuse for not accepting the invitation (i.e. a refusal). B has used the maxim that
speakers normally give replies which are relevant to the question that has been asked. The
philosopher Grice has suggested that there are four conversational maxims:

a. The maxim of quantity: give as much information as is needed.
b. The maxim of quality: speak truthfully.
c. The maxim of relevance: say things that are relevant.
d. The maxim of manner: say things clearly and briefly.

The use of conversational maxims to imply meaning during conversation is called conversational
implicature, and the co-operation between speakers in using the maxims is sometimes called the
co-operative principle.

Co-Operative Learning
Co-operative Learning is an approach to teaching and learning in which classrooms are organized so
that students work together in small co-operative teams. Such an approach to learning is said to
increase students' learning since (a) it is less threatening for many students, (b) it increases the
amount of student participation in the classroom, (c) it reduces the need for competitiveness, and (d)
it reduces the teacher's dominance in the classroom. Five distinct types of co-operative learning
activities are often distinguished:

1 Peer Tutoring: students help each other learn, taking turns tutoring or drilling each other.
2 Jigsaw: each member of a group has a piece of information needed to complete a group task.
3 Co-operative Projects: students work together to produce a product, such as a written paper or
group presentation.
4 Co-operative/Individualized: students progress at their own rate through individualized learning
materials but their progress contributes to a team grade so that each pupil is rewarded by the
achievements of his or her teammates.


5 Co-operative Interaction: students work together as a team to complete a learning unit, such as a
laboratory experiment.

Critical Linguistics
CL is an approach to the analysis of language and of language use that focuses on the role that
language plays in assigning power to particular groups within society. Critical linguistics is based on
the study of texts and the way texts are interpreted and used. The assumption is that the relation
between form and function in discourse is not arbitrary or conventional but is determined by
cultural, social, and political factors, i.e. that texts are inherently ideological in nature.

Critical Pedagogy
Critical pedagogy is a teaching approach grounded in critical theory. Critical pedagogy attempts to
help students question and challenge domination, and the beliefs and practices that dominate. In
other words, it is a theory and practice of helping students achieve critical consciousness. Critical
pedagogue Ira Shor defines critical pedagogy as
"Habits of thought, reading, writing, and speaking which go beneath surface meaning, first
impressions, dominant myths, official pronouncements, traditional clichés, received wisdom, and
mere opinions, to understand the deep meaning, root causes, social context, ideology, and personal
consequences of any action, event, object, process, organization, experience, text, subject matter,
policy, mass media, or discourse."
Critical pedagogy includes relationships between teaching and learning. It is a continuous process
of unlearning, learning and relearning, reflection, evaluation and the impact that these actions
have on the students, in particular students who have been historically and continue to be
disenfranchised by traditional schooling.
Background
Critical pedagogy was heavily influenced by the works of Paulo Freire, arguably the most celebrated
critical educator. According to his writings, Freire heavily endorses students' ability to think critically
about their education situation; this way of thinking allows them to "recognize connections between
their individual problems and experiences and the social contexts in which they are embedded."
Realizing one's consciousness ("conscientization") is a needed first step of "praxis," which is defined
as the power and know-how to take action against oppression while stressing the importance of
liberating education. "Praxis involves engaging in a cycle of theory, application, evaluation, reflection,
and then back to theory. Social transformation is the product of praxis at the collective level."
Postmodern, anti-racist, feminist, postcolonial, and queer theories all play a role in further explaining
Freire's ideas of critical pedagogy, shifting its main focus from social class to include issues pertaining to
religion, military identification, race, gender, sexuality, nationality, ethnicity, and age. Many
contemporary critical pedagogues have embraced postmodern, anti-essentialist perspectives of the
individual, of language, and of power, "while at the same time retaining the Freirean emphasis on
critique, disrupting oppressive regimes of power/knowledge, and social change." Contemporary
critical educators, such as bell hooks and Peter McLaren, discuss in their criticisms the
influence of many varied concerns, institutions, and social structures, "including globalization, the
mass media, and race/spiritual relations," while citing reasons for resisting the possibilities for
change.
Joe L. Kincheloe and Shirley R. Steinberg have created the Paulo and Nita Freire Project for
International Critical Pedagogy at McGill University. In line with Kincheloe and Steinberg's
contributions to critical pedagogy, the project attempts to move the field to the next phase of its
evolution. In this second phase critical pedagogy seeks to truly become a worldwide, decolonizing
movement dedicated to listening to and learning from diverse discourses from peoples around the
planet. Kincheloe and Steinberg are intent on not allowing critical pedagogy to become merely a
North American phenomenon or a patriarchal one. In this listening and introspective phase critical
pedagogy becomes better equipped to engage diverse peoples facing different forms of oppression
in emancipatory experiences. Taking a cue from Sandy Grande and her discussion in Red Pedagogy of
the fruitful negotiation between indigenous peoples and critical pedagogy, Kincheloe and Steinberg
envision such dialogue with peoples around the world.
Critical Period Hypothesis
The hypothesis claims that a first language can only be acquired during the first few years of life.
Young children learn perfectly any language to which they are adequately exposed, and they do
this without explicit teaching. Few adults can perform the same feat. In the 1960s the American
neurologist Eric Lenneberg proposed an explanation: we are born with a singular ability to learn
languages, but this ability is shut down, probably by some genetic programming, at around age
thirteen, the cut-off age for first-language acquisition.
Strong support for Lenneberg's hypothesis comes from the observation of feral children: children
who, for some reason, have been denied normal access to language in early life. In the eighteenth
century, a young teenage French boy later named Victor was discovered living wild. He had no
language and failed to learn much after being taken into care. More recently, a French girl known as
Isabelle and an American girl known as Genie were prevented by psychopathic parents from
hearing any language. After discovery and rescue, Isabelle, who was six, learned French rapidly, and
quickly caught up with other children of her age, but Genie, nearly fourteen when discovered,
never learned more than a minimal amount of English, in spite of intensive therapy. An American
woman known as Chelsea was born nearly deaf, but was misdiagnosed as mentally retarded. Only at
age thirty-one was she correctly diagnosed and given a hearing aid; she then began learning English
but she too never made more than minimal progress. (See SYNAPTIC PRUNING.)


Deficit Hypothesis
The hypothesis suggests that a group of speakers has an inadequate command of grammar and
vocabulary to express complex ideas. In the 1960s, the British educational theorist Basil Bernstein
proposed that a given language can be regarded as possessing two fundamentally different styles,
or codes. A restricted code, in this view, has a limited vocabulary and a limited range of
grammatical constructions; it is adequate for talking to people with very similar backgrounds about
everyday experiences, but it is highly inexplicit and depends for success upon a large degree of
shared experience. It is too inexplicit and too limited to express complex and unfamiliar ideas in a
coherent manner. An elaborated code, in contrast, possesses a large vocabulary and a wide range
of grammatical constructions, and it is entirely suitable for communicating complex ideas, in a fully
explicit manner, to people who do not share the speaker's background. Bernstein's deficit
hypothesis holds that, while middle-class children have full control over both codes, working-class
children have access only to the restricted code. Hence working-class children cannot communicate
effectively in the manner expected in educational institutions, and so they cannot hope to succeed
in schools which are largely predicated on elaborated code. This hypothesis has generated a storm
of discussion and debate. Linguists, led by William Labov, have mostly been critical and dismissive
of it. They defend instead the difference hypothesis, by which working-class speech is merely
different from middle-class speech and not inferior to it in expressiveness; hence, working-class
children in school are penalized only for being different, and not for being
incompetent. The notion of deficit has been debated also in relation to linguistic differences ascribed
to gender and ethnicity.

Diachronic Linguistics
The branch of linguistics concerned with the study of phonological, grammatical, and semantic
changes, the reconstruction of earlier stages of languages, and the discovery and application of the
methods by which genetic relationships among languages can be demonstrated. Historical linguistics
had its roots in the etymological speculations of classical and medieval times, in the comparative
study of Greek and Latin developed during the Renaissance and in the speculations of scholars as to
the language from which the other languages of the world were descended. It was only in the 19th
century, however, that more scientific methods of language comparison and sufficient data on the
early Indo-European languages combined to establish the principles now used by historical linguists.
The theories of the Neogrammarians, a group of German historical linguists and classical scholars
who first gained prominence in the 1870s, were especially important because of the rigorous manner
in which they formulated sound correspondences in the Indo-European languages. In the 20th
century, historical linguists have successfully extended the application of the theories and methods
of the 19th century to the classification and historical study of non-Indo-European languages.
Historical linguistics, when contrasted with synchronic linguistics, the study of a language at a
particular point in time, is often called diachronic linguistics.

Discourse Analysis
Discourse analysis (DA), or discourse studies, is a general term for a number of approaches to
analyzing written, spoken or signed language use; it is the branch of linguistics concerned with the
study and application of such approaches.
The objects of discourse analysis (discourse, writing, talk, conversation, communicative event, etc.)
are variously defined in terms of coherent sequences of sentences, propositions, speech acts or
turns-at-talk. Contrary to much of traditional linguistics, discourse analysts not only study language
use 'beyond the sentence boundary', but also prefer to analyze 'naturally occurring' language use,
and not invented examples. This is known as corpus linguistics; text linguistics is related.
Discourse analysis has been taken up in a variety of social science disciplines, including linguistics,
sociology, anthropology, social work, cognitive psychology, social psychology, international relations,
human geography, communication studies and translation studies, each of which is subject to its own
assumptions, dimensions of analysis, and methodologies. Sociologist Harold Garfinkel was another
influence on the discipline: see below.
History
The term discourse analysis (DA) first came into general use following the publication of a series of
papers by Zellig Harris beginning in 1952 and reporting on work from which he developed
transformational grammar in the late 1930s. Formal equivalence relations among the sentences of a
coherent discourse are made explicit by using sentence transformations to put the text in a canonical
form. Words and sentences with equivalent information then appear in the same column of an array.
This work progressed over the next four decades (see references) into a science of sublanguage
analysis (Kittredge & Lehrberger 1982), culminating in a demonstration of the informational
structures in texts of a sublanguage of science, that of immunology, (Harris et al. 1989) and a fully
articulated theory of linguistic informational content (Harris 1991). During this time, however, most
linguists pursued a succession of elaborate theories of sentence-level syntax and semantics.
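A toy sketch may help make the idea of a canonical form concrete. The Python snippet below is an invented illustration, not Harris's actual procedure: it undoes the passive transformation so that clauses carrying equivalent information line up in the same columns of an array. The example clauses and the to_canonical helper are hypothetical.

def to_canonical(subject, verb, obj, passive=False):
    """Undo the passive transformation so all clauses share one shape.
    In a passive clause such as 'antibodies are produced by lymphocytes',
    the surface subject is the underlying patient, so the roles are swapped."""
    if passive:
        subject, obj = obj, subject
    return (subject, verb, obj)

# (surface subject, verb stem, surface object, is_passive)
clauses = [
    ("lymphocytes", "produce", "antibodies", False),
    ("antibodies", "produce", "lymphocytes", True),  # passive variant
]

for subject, verb, obj, passive in clauses:
    row = to_canonical(subject, verb, obj, passive=passive)
    print("{:<14} {:<10} {:<12}".format(*row))
# Both rows print identically: equivalent information ends up in the same columns.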
Although Harris had mentioned the analysis of whole discourses, he had not worked out a
comprehensive model as of January 1952. A linguist working for the American Bible Society, James
A. Lauriault/Loriot, needed to find answers to some fundamental errors in translating Quechua, in
the Cuzco area of Peru. He took Harris's idea, recorded all of the legends and, after going over the
meaning and placement of each word with a native speaker of Quechua, was able to form logical,
mathematical rules that transcended the simple sentence structure. He then applied the process to
another language of Eastern Peru, Shipibo. He taught the theory in Norman, Oklahoma, in the
summers of 1956 and 1957 and entered the University of Pennsylvania in the interim year. He tried
to publish a paper, "Shipibo Paragraph Structure," but it was delayed until 1970 (Loriot & Hollenbach
1970). In the meantime, Dr. Kenneth L. Pike, a professor at the University of Michigan, Ann Arbor, taught
the theory, and one of his students, Robert E. Longacre, was able to disseminate it in a dissertation.
Harris's methodology was developed into a system for the computer-aided analysis of natural
language by a team led by Naomi Sager at NYU, which has been applied to a number of sublanguage
domains, most notably to medical informatics. The software for the Medical Language Processor is
publicly available on SourceForge.
In the late 1960s and 1970s, and without reference to this prior work, a variety of other approaches
to a new cross-discipline of DA began to develop in most of the humanities and social sciences
concurrently with, and related to, other disciplines, such as semiotics, psycholinguistics,
sociolinguistics, and pragmatics. Many of these approaches, especially those influenced by the social
sciences, favor a more dynamic study of oral talk-in-interaction.
Mention must also be made of the term "conversation analysis", an approach influenced by the
sociologist Harold Garfinkel, the founder of ethnomethodology.
In Europe, Michel Foucault became one of the key theorists of the subject, especially of discourse,
and wrote The Archaeology of Knowledge.
Topics of interest
Topics of discourse analysis include:
The various levels or dimensions of discourse, such as sounds (intonation, etc.), gestures, syntax,
the lexicon, style, rhetoric, meanings, speech acts, moves, strategies, turns and other aspects of
interaction
Genres of discourse (various types of discourse in politics, the media, education, science, business, etc.)
The relations between discourse and the emergence of syntactic structure
The relations between text (discourse) and context
The relations between discourse and power
The relations between discourse and interaction
The relations between discourse and cognition and memory

Perspectives
The following are some of the specific theoretical perspectives and analytical approaches used in
linguistic discourse analysis:
Emergent grammar
Text grammar (or 'discourse grammar')
Cohesion and relevance theory
Functional grammar
Rhetoric
Stylistics (linguistics)
Interactional sociolinguistics
Ethnography of communication
Pragmatics, particularly speech act theory
Conversation analysis
Variation analysis
Applied linguistics
Cognitive psychology, often under the label discourse processing, studies the production and
comprehension of discourse.
Discursive psychology
Response based therapy (counselling)
Critical discourse analysis
Sublanguage analysis

Although these approaches emphasize different aspects of language use, they all view language as
social interaction, and are concerned with the social contexts in which discourse is embedded.
Often a distinction is made between 'local' structures of discourse (such as relations among
sentences, propositions, and turns) and 'global' structures, such as overall topics and the schematic
organization of discourses and conversations. For instance, many types of discourse begin with some
kind of global 'summary', in titles, headlines, leads, abstracts, and so on.

Critical Discourse Analysis
Critical Discourse Analysis (CDA) is an interdisciplinary approach to the study of discourse that
views language as a form of social practice and focuses on the ways social and political domination
are reproduced by text and talk.
Since Norman Fairclough's Language and Power in 1989, CDA has been deployed as a method of
analysis throughout the humanities and social sciences. It is neither a homogeneous nor a
necessarily united approach, nor does it confine itself only to method. The single assumption
shared by CDA practitioners is that language and power are inextricably linked.
Background
CDA was first developed by Norman Fairclough. The approach draws from several disciplines in the
humanities and social sciences, such as critical linguistics. Fairclough developed a three-dimensional
framework for studying discourse, where the aim is to map three separate forms of analysis onto one
another: analysis of (spoken or written) language texts, analysis of discourse practice (processes of
text production, distribution and consumption) and analysis of discursive events as instances of
sociocultural practice.
In addition to linguistic theory, the approach draws from social theory and contributions from Karl
Marx, Antonio Gramsci, Louis Althusser, Jürgen Habermas, Michel Foucault and Pierre Bourdieu in
order to examine ideologies and power relations involved in discourse. Language connects with the
social through being the primary domain of ideology, and through being a site of, and a stake in,
struggles for power. Ideology has been called the basis of the social representations of groups, and,
in psychological versions of CDA developed by Teun A. van Dijk and Ruth Wodak, there is assumed to
be a sociocognitive interface between social structures and discourse structures. The historical
dimension in critical discourse studies also plays an important role.
Methodology
Although CDA is sometimes mistaken for a 'method' of discourse analysis, it is generally
agreed that any explicit method in discourse studies, the humanities and social sciences may be
used in CDA research, as long as it is able to adequately and relevantly produce insights into the way
discourse reproduces (or resists) social and political inequality, power abuse or domination. That is,
CDA does not limit its analysis to specific structures of text or talk, but systematically relates these to
structures of the sociopolitical context.

Experiential Learning
Experiential learning is the process of making meaning from direct experience. Aristotle once said,
"For the things we have to learn before we can do them, we learn by doing them." David A. Kolb
helped to popularize the idea of experiential learning, drawing heavily on the work of John Dewey
and Jean Piaget. His work on experiential learning has contributed greatly to expanding the
philosophy of experiential education.
Experiential learning is learning through reflection on doing, which is often contrasted with rote or
didactic learning. Experiential learning is related to, but not synonymous with, experiential
education, action learning, adventure learning, free choice learning, cooperative learning, and service
learning. While there are relationships and connections between all these theories of education,
importantly they are also separate terms with separate meanings.
Experiential learning focuses on the learning process for the individual (unlike experiential education,
which focuses on the transactive process between teacher and learner). An example of experiential
learning is going to the zoo and learning through observation and interaction with the zoo
environment, as opposed to reading about animals from a book. Thus, one makes discoveries and
experiments with knowledge firsthand, instead of hearing or reading about others' experiences.
Experiential learning requires no teacher and relates solely to the meaning making process of the
individual's direct experience. However, though the gaining of knowledge is an inherent process that
occurs naturally, for a genuine learning experience to occur, there must exist certain elements.
According to David Kolb, an American educational theorist, knowledge is continuously gained
through both personal and environmental experiences. He states that in order to gain genuine
knowledge from an experience, certain abilities are required:

1. the learner must be willing to be actively involved in the experience;


2. the learner must be able to reflect on the experience;
3. the learner must possess and use analytical skills to conceptualize the experience; and
4. the learner must possess decision making and problem solving skills in order to use the new
ideas gained from the experience.
For the adult learner especially, experience becomes a "living textbook" to which they can refer.
However, as John Dewey pointed out, experiential learning can often lead to "mis-educative
experiences." In other words, experiences do not automatically equate to learning. The classic example
of this is the lecture experience many students have in formal educational settings. While the
content of the course might be "physics," the experiential learning becomes "I hate physics."
Preferably, the student should have learned "I hate lectures." Experiential learning therefore can be
problematic as generalizations or meanings may be misapplied. Without continuity and interaction,
experience may actually distort educational growth and disable an otherwise capable learner. There
are countless examples of this in prejudice, stereotypes, and other related areas.
Implementation
Experiential learning can be a highly effective educational method. It engages the learner at a more
personal level by addressing the needs and wants of the individual. Experiential learning requires
qualities such as self-initiative and self-evaluation. For experiential learning to be truly effective, it
should employ the whole learning wheel, from goal setting, to experimenting and observing, to
reviewing, and finally action planning. This complete process allows one to learn new skills, new
attitudes or even entirely new ways of thinking.
Simple games, such as hopscotch, can teach many valuable academic and social skills, like team
management, communication, and leadership. Games are popular as experiential learning
techniques because of the "fun factor": learning through fun helps the learner to retain
the lessons for a longer period.
Most educators understand the important role experience plays in the learning process. A fun
learning environment, with plenty of laughter and respect for the learner's abilities, also fosters an
effective experiential learning environment. It is vital that the individual is encouraged to directly
involve themselves in the experience, in order that they gain a better understanding of the new
knowledge and retain the information for a longer time. As stated by the ancient Chinese
philosopher Confucius, "[t]ell me and I will forget, show me and I may remember, involve me and I
will understand."
According to D'Jungle People Experiential Learning Consultants Malaysia, experiential learning is
about creating an experience where learning can be facilitated. How do you create a well-crafted
learning experience? The key lies in the facilitator and how he or she facilitates the learning process.
An excellent facilitator believes in the creed, "You teach some by what you say, teach more by what
you do, but most of all, you teach most by who you are." And while it is the learner's experience that
is most important to the learning process, it is also important not to forget the wealth of experience
a good facilitator also brings to the situation.
An effective experiential facilitator is one who is passionate about his or her work and is able to
immerse participants totally in the learning situation, allowing them to gain new knowledge from
their peers and the environment created. These facilitators stimulate the imagination, keeping
participants hooked on the experience.
Sudbury model of democratic education schools assert that much of the learning going on in their
schools, including values, justice, democracy, arts and crafts, professions, and frequently academic
subjects, is done by learning through experience.
Comparisons
Experiential learning is most easily compared with academic learning, the process of acquiring
information through the study of a subject without the necessity for direct experience. While the
dimensions of experiential learning are analysis, initiative, and immersion, the dimensions of
academic learning are constructive learning and reproductive learning. Though both methods aim at
instilling new knowledge in the learner, academic learning does so through more abstract, classroom
based techniques, whereas experiential learning actively involves the learner in a concrete
experience.

Formative Assessment
Formative assessment is a self-reflective process that intends to promote student attainment. Cowie
and Bell define it as the bidirectional process between teacher and student to enhance, recognize
and respond to the learning. Black and Wiliam consider an assessment formative when the
feedback from learning activities is actually used to adapt the teaching to meet the learner's needs.
Nicol and Macfarlane-Dick have re-interpreted research on formative assessment and feedback and
shown how these processes can help students take control of their own learning (self-regulated
learning).
In the training field, formative assessment is described as assessing the formation of the student.
Facilitators do this by observing students as they:
Respond to questions
Ask questions
Interact with other students during activities, etc.
This enables the facilitator to evaluate his or her own delivery, the readability (fog index) of the content, and its relevance.
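As a rough illustration of the fog index mentioned above, the sketch below computes the Gunning fog index, which estimates the years of schooling a reader needs to follow a passage on a first reading. The syllable counter is a crude vowel-group heuristic and the sample text is invented; this is a sketch, not a validated readability tool.

import re

def count_syllables(word):
    # Crude heuristic: count runs of vowels as syllables.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fog_index(text):
    # Gunning fog index: 0.4 * (words per sentence + percentage of complex
    # words), where a complex word has three or more syllables.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if count_syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))

sample = ("The facilitator observes students as they respond to questions. "
          "This enables an evaluation of delivery and relevance.")
print(round(fog_index(sample), 1))  # about 17.5: this sample is wordy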
Formative Assessments - Chronology and Intent
Michael Scriven (1967) coined the terms formative and summative evaluation and emphasized their
differences both in terms of the goals of the information they seek and how the information is used.
Benjamin Bloom (1968) just a year later made formative assessments a keystone of Learning for
Mastery. He, along with Thomas Hasting and George Madaus (1971) produced the Handbook of
Formative and Summative Evaluation and showed how formative assessments could be linked to
instructional units in a variety of content areas. Almost 20 years ago, the Kentucky high-stakes
assessment (Kifer, 1994) initially included a major emphasis on instructionally embedded tests; i.e.,
formative assessments.
Formative assessment has a long history. Formative assessments have evolved as a means to adapt
to student needs. Historically formative assessments were of instructional units and diagnostic
assessments were used for placement purposes. Formative assessments are part of instruction
designed to provide crucial feedback for teachers and students. Assessment results inform the
teacher of what has been taught well and not so well. They inform students of what they have
learned well and not learned so well. As opposed to summative assessments, which are designed to make
judgments about student performance and produce grades, the role of a formative assessment is to
improve learning. As opposed to benchmark tests that are used to predict student performance on
other tests (most often state assessments), formative assessments are intimately connected to
instruction.
Formative assessments are:
For Learning - The purpose of formative assessment is to enhance learning, not to allocate grades;
summative assessments are designed to allocate grades. The goal of formative assessment is to
improve, that of summative assessment to prove.
Embedded in Instruction - Formative assessments are considered a part of instruction and the
instructional sequence. What students are taught is reflected in what they are assessed on.
They produce:
Non-threatening Results - Formative assessments are scored but not graded. Students mark their
own work and are encouraged to raise questions about the assessment and the material covered by
the assessment.
Direct and Immediate Feedback - Results of formative assessments are produced on the spot;
teachers and students get them immediately. Teachers get a view of both individual and class
performances, while students learn how well they have done.
Structured Information - Teachers can judge success and plan improvements based on the formative
results. Students can see progress and experience success. Both teachers and students learn from
the assessment results.
Ways to Improve - Summarized formative results provide a basis for the teacher to re-visit topics in
the unit if necessary. Individual student responses provide a basis for giving students additional
experiences in areas where they performed less well.
Formative assessment is more valuable for day-to-day teaching when it is used to adapt the
teaching to meet students' needs. Formative assessment helps teachers to monitor their students'
progress and to modify the instruction accordingly. It also helps students to monitor their own
progress as they get feedback from their peers and the teacher. Students also find opportunities to
revise and refine their thinking by means of formative assessment. Formative assessment is also
called educative assessment or classroom assessment.
Methods of Formative Assessment: There are many ways to integrate formative assessment into
K-12 classrooms. Although the key concepts of formative assessment, such as constant feedback,
modifying the instruction, and information about students' progress, do not vary among different
disciplines or levels, the methods or strategies may differ. For example, researchers developed
generative activities (Stroup et al., 2004) and model-eliciting activities (Lesh et al., 2000) that can be
used as formative assessment tools in mathematics and science classrooms. Others developed
strategies for computer-supported collaborative learning environments (Wang et al., 2004b). More
information about the application of formative assessment in specific areas is given below.
Purpose of Formative Assessment: The following are examples of the application of formative
assessment to content areas:

Functions of Language
It refers to the various purposes to which language may be put. We often tend to assume that the
function of language is communication, but things are more complicated than that. Language serves
a number of diverse functions, only some of which can reasonably be regarded as communicative.
Here are some of the functions of language which we can distinguish:
1 We pass on factual information to other people.
2 We try to persuade other people to do something.
3 We entertain ourselves or other people.
4 We express our membership in a particular group.
5 We express our individuality.
6 We express our moods and emotions.
7 We maintain good (or bad) relations with other people.
8 We construct mental representations of the world.
All of these functions are important, and it is difficult to argue that some of them are more
important, or more primary, than others. For example, studies of conversations in pubs and bars
have revealed that very little information is typically exchanged on these occasions and that the
social functions are much more prominent. Of course, a university lecture or a newspaper story will
typically be very different.
This diversity of function has complicated the investigation of the origin and evolution of language.
Many particular hypotheses about the origin of language have tended to assume that just one of
these diverse functions was originally paramount, and that language came into being specifically to
serve that one function. Such assumptions are questionable, and hence so are the hypotheses based
upon them.

Proponents of functionalism are often interested in providing classifications of the functions of
languages or texts; see under Systemic Linguistics for a well-known example.

Generative Grammar
It is a grammar of a particular language which is capable of defining all and only the grammatical
sentences of that language. The notion of generative grammar was introduced by the American
linguist Noam Chomsky in the 1950s, and it has been deeply influential. Earlier approaches to
grammatical description had focused on drawing generalizations about the observed sentences of a
language. Chomsky proposed to go further: once our generalizations are accurate and complete, we
can turn them into a set of rules which can then be used to build up complete grammatical sentences
from scratch. A generative grammar is mechanical and mindless; once constructed, it requires no
further human intervention. The rules of the grammar, if properly constructed, automatically define
the entire set of the grammatical sentences of the language, without producing any ungrammatical
garbage. Since the number of possible sentences in any human language is infinite, and since we do
not want to write an infinitely long set of rules, a successful generative grammar must have the
property of recursion: a single rule must be allowed to apply over and over in the construction of a
single sentence.
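A minimal sketch in Python can make both points concrete: the rules alone build grammatical sentences with no further human intervention, and one recursive rule (a noun phrase that contains a clause, which may contain another noun phrase) is enough to define an infinite set of sentences. The toy grammar is invented for illustration and is far smaller than anything a linguist would propose.

import random

# A toy generative grammar: each symbol maps to its possible expansions.
# 'NP' is recursive (an NP may contain a relative clause containing a VP,
# which may contain another NP), so finitely many rules define infinitely
# many sentences.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "VP"]],  # second option is recursive
    "VP": [["V", "NP"], ["sleeps"]],
    "N":  [["dog"], ["cat"], ["linguist"]],
    "V":  [["sees"], ["chases"]],
}

def generate(symbol="S"):
    """Mechanically expand a symbol until only words (terminals) remain."""
    if symbol not in GRAMMAR:  # a terminal: an actual word
        return [symbol]
    words = []
    for sym in random.choice(GRAMMAR[symbol]):
        words.extend(generate(sym))
    return words

for _ in range(3):
    print(" ".join(generate()))
# e.g. "the linguist that sleeps chases the dog" -- recursion at work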
Chomsky himself defined several quite different types of generative grammar, and many other types
have more recently been defined by others. A key characteristic of any generative grammar is its
power: the larger the number of different kinds of grammatical phenomena the grammar can handle
successfully, the more powerful is the grammar. But (and this is a fundamental point) we do not
want our grammars to have limitless power. Instead, we want our grammars to be just powerful
enough to handle successfully the things that actually happen in languages, but not powerful enough
to handle things that do not happen in languages.
Within certain limits, all the different kinds of generative grammar can be arranged in a hierarchy,
from least powerful to most powerful; this arrangement is called the Chomsky hierarchy. The goal of
Chomsky's research programme, then, is to identify that class of generative grammars which
matches the observed properties of human languages most perfectly. If we can do that, then the
class of generative grammars we have identified must provide the best possible model for the
grammars of human languages.
Two of the most important classes of generative grammars so far investigated are (context-free)
phrase structure grammar and transformational grammar. The second is far more powerful than the
first (and arguably too powerful to serve as an adequate model for human languages), while the
first is now known to be just slightly too weak (and has been modified). (Special note: in recent years,
Chomsky and his followers have been applying the term generative grammar very loosely to the
framework called Government-and-Binding Theory [GB], but it should be borne in mind that GB is
not strictly a generative grammar in the original sense of the term, since it lacks the degree of
rigorous formal underpinning which is normally considered essential in a generative grammar.)

Government-And-Binding Theory
A particular theory of grammar, the descendant of transformational grammar. During the 1960s and
1970s, Noam Chomsky's transformational grammar went through a number of substantial revisions.
In 1980, Chomsky gave a series of lectures in Pisa outlining a dramatic revision of his ideas; these
lectures were published in 1981 as a book, Lectures on Government and Binding. The new
framework presented became known as the Government-and-Binding Theory (GB) or as the
Principles-and-Parameters approach.
GB represents a great departure from its transformational ancestors; while it still retains a single
transformational rule, the framework is so different from what preceded it that the name
transformational grammar is not normally applied to it.

As the alternative name suggests, GB is based squarely upon two ideas. First, the grammars of all
languages are embedded in a universal grammar, conceived as a set of universal principles applying
equally to the grammar of every language. Second, within universal grammar, the grammars of
particular languages may differ only in small and specified respects; these possible variations are
conceived as parameters, and the idea is that the grammar of any single language will be
characterized by the use of a particular setting for each one of these parameters. The number of
available settings for each parameter is small, usually only two or three.
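The sketch below pictures this idea as a small table of switches. The three parameters shown (head direction, pro-drop, wh-movement) are standard textbook examples rather than GB's official inventory, and the representation is a drastic simplification, not GB machinery.

# Principles-and-parameters as a table of switches: each language is one
# assignment of values drawn from a small universal menu. The parameters
# below are common textbook examples, not GB's actual full set.
PARAMETERS = {
    "head_direction": ("head-initial", "head-final"),  # verb-object vs object-verb
    "pro_drop":       (True, False),                   # may subjects be omitted?
    "wh_movement":    ("overt", "in-situ"),            # do question words move?
}

SETTINGS = {
    "English":  {"head_direction": "head-initial", "pro_drop": False, "wh_movement": "overt"},
    "Italian":  {"head_direction": "head-initial", "pro_drop": True,  "wh_movement": "overt"},
    "Japanese": {"head_direction": "head-final",   "pro_drop": True,  "wh_movement": "in-situ"},
}

def validate(language):
    """Check that a language uses only settings the universal menu licenses."""
    for name, value in SETTINGS[language].items():
        assert value in PARAMETERS[name], f"{language}: illegal setting for {name}"

for lang in SETTINGS:
    validate(lang)
    print(lang, SETTINGS[lang])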
GB is a modular framework. Its machinery is divided up into about eight distinct modules, or
components. Each of these modules is responsible for treating different aspects of sentence
structure, and each is subject to its own particular principles and constraints.
A sentence structure is well formed only if it simultaneously meets the independent requirements of
every one of the modules. Two of those modules (those treating government and binding, the
possibility that two noun phrases in a sentence refer to the same entity) give GB its name.
Just like transformational grammar, GB sees every sentence as having both an abstract underlying
structure (the former deep structure, now renamed D-structure) and a superficial structure (the
former surface structure, now renamed S-structure). There is also a third level of representation,
called logical form (LF). Certain requirements apply to each one of these three levels, while further
requirements apply to the way in which the three of them are related.
The motivation for all this, of course, is the hope of reducing the grammars of all languages to
nothing more than minor variations upon a single theme, the unvarying principles of universal
grammar. But the task is far from easy, and Chomsky, confronted by recalcitrant data, has been
forced into the position of claiming that the grammar of every language consists of two quite
different parts: a 'core', which alone is subject to the principles of universal grammar, and a
'periphery' consisting of miscellaneous language-specific statements not subject to universal
principles. This ploy has been seen by critics as a potentially catastrophic retreat from the whole
basis of the Chomskyan research programme.
GB was an abstract framework to begin with, but it has become steadily more abstract, as its
proponents, confronted by troublesome data, have tended to posit ever greater layers of
abstraction, in the hope of getting their universal principles to apply successfully at some level of
representation. Critics have not been slow to see this retreat into abstraction as a retreat from the
data altogether, that is as an attempt to shoehorn the data into a priori principles which themselves
are sacrosanct. The more outspoken critics have declared the GB framework to be more a religious
movement than an empirical science. Nevertheless, GB has for years been by far the most influential
and widely practiced theory of grammar in existence.
Recently, however, Chomsky has, to general surprise, initiated the Minimalist Program (original US
spelling), in which almost all of the elaborate machinery of GB is rejected in favor of a very
different approach.

Hockett's 13 Design Features of Language


Vocal-Auditory Channel: Much of human language is performed using the vocal tract and auditory
channel. Hockett viewed this as an advantage for human primates because it allowed for the ability
to participate in other activities while simultaneously communicating through spoken language.
Broadcast transmission and directional reception: All human language can be heard if it is within
range of another person's auditory channel. Additionally, a listener has the ability to determine the
source of a sound by binaural direction finding.
Rapid Fading (transitoriness): Wave forms of human language dissipate over time and do not persist.
A hearer can only receive specific auditory information at the time it is spoken.
Interchangeability: A person has the ability to both speak and hear the same signal. Anything that a
person is able to hear, they have the ability to reproduce through spoken language.
Total Feedback: A speaker has the ability to hear themselves speak. Through this, they are able to
monitor their speech production and internalize what they are producing through language.
Specialization: Human language sounds are specialized for communication. When dogs pant, it is to
cool themselves off; when humans speak, it is to transmit information.
Semanticity: This refers to the idea that specific signals can be matched with a specific meaning.
Arbitrariness: There is no limitation to what can be communicated about and there is no specific or
necessary connection between the sounds used and the message being sent.
Discreteness: Phonemes can be placed in distinct categories which differentiate them from one
another, such as the distinct sound of /p/ versus /b/.
Displacement: The ability to refer to things in space and time and communicate about things that are
currently not present.
Productivity: The ability to create new and unique meanings of utterances from previously existing
utterances and sounds.
Traditional Transmission: The idea that human language is not completely innate and acquisition
depends in part on the learning of a language.
Duality of patterning: Meaningless phonic segments (phonemes) are combined to make meaningful
words, which in turn are combined again to make sentences.
While Hockett believed that all communication systems, animal and human alike, share many of
these features, only human language contains all 13 design features. Additionally, traditional
transmission and duality of patterning are key to human language.
Hockett's Design Features and their implications for human language
Hockett suggests that the importance of a vocal-auditory channel lies in the fact that the animal can
communicate while also performing other tasks, such as eating, or using tools.
Broadcast Transmission and Directional Reception: An auditory/audible human language signal is
sent out in all directions, but is perceived in a limited direction. For example, humans are more
proficient in determining the location of a sound source when the sound is projecting directly in front
of them as opposed to a sound source projected directly behind them.
Rapid Fading of a signal in human communication differs from such things as animal tracks and
written language because an utterance does not continue to exist after it has been broadcast. With
this in mind, it is important to note that Hockett viewed spoken language as the primary concern for
investigation. Written language was seen as being secondary due to its recent evolution in culture.

Interchangeability represents a human's ability to act out or reproduce any linguistic message that
they are able to comprehend. This differs from many animal communication systems, particularly in
regards to mating. For example, humans have the ability to say and do anything that they feel may
benefit them in attracting a mate. Sticklebacks on the other hand have different male and female
courtship motions; a male cannot replicate a female's motions and vice versa.
Total Feedback is important in differentiating a human's ability to internalize their own productions
of speech and behavior. This design-feature incorporates the idea that humans have insight into their
actions.
Specialization is apparent in the anatomy of human speech organs and our ability to exhibit some
control over these organs. For example, a key assumption in the evolution of language is that the
descent of the larynx has allowed humans to produce speech sounds. Additionally, in terms of
control, humans are generally able to control the movements of their tongue and mouth. Dogs,
however, do not have control over these organs. When dogs pant they are communicating a signal,
but the panting is an uncontrollable reflex response to being hot.
Semanticity: A specific signal can be matched with a specific meaning within a particular language
system. For example, all people that understand English have the ability to make a connection
between a specific word and what that word represents or refers to. (Hockett notes that gibbons also
show semanticity in their signals; however, their calls are far broader than human language.)
Arbitrariness within human language suggests that there is no direct connection between the type of
signal (word) and what is being referenced. For example, an animal as large as a cow can be referred
to by a very short word.
Discreteness: Each basic unit of speech can be categorized and is distinct from other categories. In
human language there are only a small set of sound ranges that are used and the differences
between these bits of sound are absolute. In contrast, the waggle dance of honeybees is continuous.
Displacement refers to the human language system's ability to communicate about things that are
not present spatially, temporally, or realistically. For example, humans have the ability to
communicate about unicorns and outer space.
Productivity: human language is open and productive in the sense that humans have the ability to
say things that have never before been spoken or heard. In contrast, apes such as the gibbon have a
closed communication system because all of their vocal sounds are part of a finite repertoire of
familiar calls.
Traditional Transmission: suggests that while certain aspects of language may be innate, humans
acquire words and their native language from other speakers. This is different from many animal
communication systems because most animals are born with the innate knowledge and skills
necessary for survival. (Example: Honeybees have an inborn ability to perform and understand the
waggle dance).
Duality of patterning: Humans have the ability to recombine a finite set of phonemes to create an
infinite number of words, which in turn can be combined to make an unlimited number of different
sentences.

Innateness Hypothesis
The hypothesis suggesting that children are born knowing what human languages are like. It is
obvious that particular languages are not innate and must be learned. Any child, regardless of
ethnic background, will learn perfectly whatever language it is exposed to, and an isolated child
prevented from any exposure to language will learn no language at all. Nevertheless, modern
linguists are often impressed by the striking resemblances among languages all over the globe. In
spite of the obvious and seemingly dramatic differences among them, linguists are increasingly
persuading themselves that the observed degree of variation in language structures is much less than
we might have guessed in advance, and hence that there are important universal properties shared
by all languages.
In the 1960s, the American linguist Noam Chomsky put forward a bold hypothesis to explain this
apparent universality: according to his innateness hypothesis, a number of important characteristics
of language are built into our brains at birth, as part of our genetic endowment, and hence we are
born already knowing what a human language can be like. In this view, then, learning a particular
language is merely a matter of learning the details which distinguish that language from other
languages, while the universal properties of languages are already present and need not be learned.
The innateness hypothesis was controversial from the start, and a number of critics, among them
philosophers and psychologists, took vigorous issue with Chomsky's position, arguing that there is no
evidence for innate linguistic knowledge, and that the acquisition of a first language could be
satisfactorily explained in terms of the all-purpose cognitive faculties which the child uses to acquire
other types of knowledge about the world. This controversy reached a head in 1975, when Chomsky
debated the issue with one of his most distinguished critics, the Swiss psychologist Jean Piaget.
Chomsky and his supporters have responded in several ways. First, they attempt to point to
identifiable universal properties of language, what they call universal grammar (itself a deeply
controversial notion); these properties they claim to be arbitrary, unexpected and in no way
deducible from general cognitive principles.
Second, they point out that children never make certain types of errors which we might have
expected. For example, having learned 'The dog is hungry', they can produce 'The dog looks
hungry'; yet, having learned 'Susie is sleeping', they never produce 'Susie looks sleeping'. Third, they
invoke the poverty of the stimulus. By this term they mean that the data available to the child are
quite inadequate to account for the knowledge which the child eventually acquires. For example,
the usual rules of question formation in English seem to predict that a statement like 'The girls who
were throwing snowballs have been punished' should have a corresponding question 'What have
the girls who were throwing been punished?' In fact, every English-speaker knows that this is
impossible, and no child or adult ever tries to construct such questions.
However, there seems to be no way that this constraint could possibly be inferred from what the
child hears, and Chomsky therefore invokes a universal principle, supported by comparable data
from other languages, which he takes as part of our innate linguistic endowment.

Interlanguage
An interlanguage or, more explicitly, interim language is an emerging linguistic system that has been
developed by a learner of a second language (or L2) who has not become fully proficient yet but is
only approximating the target language: preserving some features of their first language (or L1) in
speaking or writing the target language and creating innovations. An interlanguage is idiosyncratically
based on the learners' experiences with the L2. It can fossilize in any of its developmental stages. The
interlanguage consists of: L1 transfer, transfer of training, strategies of L2 learning (e.g.
simplification), strategies of L2 communication (e.g. do not think about grammar while talking), and
overgeneralization of the target language patterns.

Interlanguage is based on the theory that there is a "psychological structure latent in the brain"
which is activated when one attempts to learn a second language. Larry Selinker proposed the theory
of interlanguage in 1972, noting that in a given situation the utterances produced by the learner are
different from those native speakers would produce had they attempted to convey the same
meaning. This comparison reveals a separate linguistic system. This system can be observed when
studying the utterances of the learners who attempt to produce a target language norm.
To study the psychological processes involved, one should compare the interlanguage of the learner
with two things:
Utterances in the native language to convey the same message made by the learner
Utterances in the target language to convey the same message made by the native speaker of that
language.
Interlanguage yields new linguistic variety, as features from a group of speakers' L1 community may
be integrated into a dialect of the speaker's L2 community. Interlanguage is in itself the basis for
diversification of linguistic forms through an outside linguistic influence. Dialects formed by
interlanguage are the product of a need to communicate between speakers with varying linguistic
ability, and with increased interaction with a more standard dialect, are often marginalized or
eliminated in favor of a standard dialect. In this way, interlanguage may be thought of as a temporary
tool in language or dialect acquisition.

Interlanguage Fossilization
Interlanguage fossilization is a stage during second language acquisition. When mastering a target
language (TL), second language (L2) learners develop a linguistic system that is self-contained and
different from both the learner's first language (L1) and the TL (Nemser, 1971). This linguistic system
has been variously called interlanguage (IL) (Selinker, 1972), approximative system (Nemser, 1971),
idiosyncratic dialects or transitional dialects (Corder, 1971), etc.
According to Corder (1981), this temporary and changing grammatical system, IL, which is
constructed by the learner, approximates the grammatical system of the TL. In the process of L2
acquisition, IL continually evolves into an ever-closer approximation of the TL, and ideally should
advance gradually until it becomes equivalent, or nearly equivalent, to the TL. However, during the
L2 learning process, an IL may reach one or more temporary restricting phases when its development
appears to be detained (Nemser, 1971; Selinker, 1972; Schumann, 1975). A permanent cessation of
progress toward the TL has been referred to as fossilization (Selinker, 1972). This linguistic
phenomenon, IL fossilization, can occur despite all reasonable attempts at learning (Selinker, 1972).
Fossilization includes those items, rules, and sub-systems that L2 learners tend to retain in their IL,
that is, all those aspects of IL that become entrenched and permanent, and that the majority of L2
learners can only eliminate with considerable effort (Omaggio, 2001). Moreover, it has also been
noticed that this occurs particularly in adult L2 learners' IL systems (Nemser, 1971; Selinker, 1972,
Selinker & Lamendella, 1980).
Selinker (1972) suggests that the most important distinguishing factor related to L2 acquisition is the
phenomenon of fossilization. However, both his explanation that fossilizable linguistic phenomena
are "linguistic items, rules, and subsystems which speakers of a particular native language will tend to
keep in their interlanguage relative to a particular target language, no matter what the age of the
learner or amount of explanation or instruction he receives in the target language" (Selinker, 1972, p.
215) and his hypotheses on IL fossilization are fascinating in that they contradict our basic
understanding of the human capacity to learn. How is it that some learners can overcome IL
fossilization, even if they only constitute, according to Selinker, a mere 5% (1972, p. 212), while the
majority of L2 learners cannot, no matter what the age or amount of explanation or instruction? Or
is it perhaps not that they cannot overcome fossilization, but that they will not? Does complacency
set in after L2 learners begin to communicate, as far as they are concerned, effectively enough in the
TL, and as a result does motivation to achieve native-like competence diminish?
The concept of fossilization in SLA research is so intrinsically related to IL that Selinker (1972)
considers it to be a fundamental phenomenon of all SLA, not just of adult learners. Fossilization
has received such wide recognition that it has been entered in the Random House Dictionary of the
English Language (1987). Selinker's concept of fossilization is similar to that of Tarone (1976), Nemser
(1971), and Sridhar (1980), all of whom attempted to explore the causes of fossilization in L2
learners' IL.
Fossilization has attracted considerable interest among researchers and has engendered significant
differences of opinion. The term, borrowed from the field of paleontology, and actually a misnomer,
is effective because it conjures up an image of dinosaurs being enclosed in residue and becoming a
set of hardened remains encased in sediment. The metaphor, as used in SLA literature, is appropriate
because it refers to earlier language forms that become encased in a learner's IL and that,
theoretically, cannot be changed by special attention or practice of the TL. Despite debate over the
degree of permanence, fossilization is generally accepted as a fact of life in the process of SLA.
Many researchers have attempted to explain this (Adjemian, 1976; Corder, 1971, 1978; De Prada
Creo, 1990; Nakuma, 1998; Selinker, 1972; Nemser, 1971; Schumann, 1976, 1978a, 1978b, 1990).
Workers have attempted to discover: 1) why fossilization occurs (Adjemian, 1976, Naiman, et al.,
1996; Schumann, 1976, 1978a, 1978b, 1990; Seliger, 1978; Stern, 1975; Virgil & Oller, 1976); 2) the
precipitating conditions (Schumann, 1976, 1978a, 1978b, 1990; Virgil & Oller, 1976); 3) what kind of
linguistic material is likely to be fossilized (Selinker & Lakshamanan 1992; Todeva, 1992); and 4) what
type of learners are more prone to fossilize (Adjemian, 1976; Scovel, 1969, 1978, 1988, 2000;
Selinker, Swain & Dumas, 1975; Virgil & Oller, 1976). However, there has been almost no
investigation by SLA theorists on the possibilities of preventing or overcoming fossilization, and little
explanation related to those adult L2 learners who overcome one or more areas of stability in IL
those learners whose IL does not fossilize, and who reach a high level of proficiency in the L2 (Acton,
1984; Birdsong, 1992; Bongaerts, et al., 1997; Ioup, Boustagui, El Tigi, & Mosell, 1994; Selinker,
1972).
One factor of obvious relevance is motivation, and studies have been conducted regarding
motivation to learn an L2 (Gardner, 1988; Gardner & Smythe, 1975; Schumann, 1976, 1978a, 1978b),
and the relationship of fossilization to the learner's communicative needs (Corder, 1978; Nickel,
1998; Ushioda, 1993). Arguments have emerged regarding adult learners' general lack of empathy
with TL native speakers and culture. According to Guiora et al. (1972), adults do not have the
motivation to change their accent and to acquire native-like pronunciation. Unlike children, who are
generally more open to TL culture, adults have more rigid language ego boundaries. Thus, adults may
be inclined to assert their pre-existing cultural and ethnic identity, and this they do by
maintaining their stereotypical accent (Guiora et al., 1972). Notwithstanding this, there is a lack of
needed research, particularly regarding achievement motivation, especially considering that
fossilization can be considered the most distinctive characteristic of adult SLA.

Joint attention
Joint attention is the process by which one alerts another to a stimulus via nonverbal means, such
as gazing or pointing. For example, one person may gaze at another person, and then point to an
object, and then return their gaze back to the other person. In this case, the pointing person is
"initiating joint attention" by trying to get the other to look at the object. The person who looks to
the referenced object is "responding to joint attention." Joint attention is referred to as a triadic skill,
meaning that it involves two people and an object or event outside of the duo. It is well
documented that infants display both types of joint attention at 9 months of age. Recently it was
discovered that infants as young as 3 months of age clearly discriminate between triadic and
non-triadic contexts. Great apes (orangutans, gorillas, chimpanzees, and bonobos) also show some
understanding of joint attention. There is a debate in contemporary psychology about the
psychological significance of joint attention: the majority of theorists believe that although both
humans and the great apes use it as a means to an end, humans alone also use it for purely
altruistic communicative purposes, whereas a vocal minority maintain that joint attention is
always a means to an end (i.e., that "pure communication" in the infancy period is a myth), and
therefore joint attention by apes and humans reflects shared psychological processes.

Kelly's Personal Construct Theory


"A person's processes are psychologically channelized by the ways in which he anticipates events"
George Kelly, The Psychology of Personal Constructs, Corollaries
Construction Corollary. We anticipate future events according to our interpretations of recurrent
themes.
Individuality Corollary. People have different experiences and therefore construe events in different
ways.
Organization Corollary. We organize our personal constructs in a hierarchical system, with some
constructs in a superordinate position and others subordinate to them. This organization allows us to
minimize incompatible constructs.
Dichotomy Corollary. All personal constructs are dichotomous, that is, we construe events in an
either/or manner.
Choice Corollary. We choose the alternative in a dichotomized construct that we see as extending
our range of future choices.
Range Corollary. Constructs are limited to a particular range of convenience, that is, they are not
relevant to all situations.
Experience Corollary. We continually revise our personal constructs as the result of experience.
Modulation Corollary. Not all new experiences lead to a revision of personal constructs. To the extent
that constructs are permeable they are subject to change through experience. Concrete or
impermeable constructs resist modification regardless of our experience.
Fragmentation Corollary. Our behavior is sometimes inconsistent because our construct system can
readily admit incompatible elements.
Commonality Corollary. To the extent that we have had experiences similar to others, our personal
constructs tend to be similar to the construction systems of those people.
Sociality Corollary. We are able to communicate with others because we can construe their
constructions. We not only observe the behavior of others, but we also interpret what that behavior
means to them.

Language Acquisition
Language acquisition is the process by which humans acquire the capacity to perceive, produce and
use words to understand and communicate. This involves picking up diverse capacities, including
syntax, phonetics, and an extensive vocabulary. The language might be vocal, as with speech, or
manual, as in sign. Language acquisition usually refers to first language acquisition,
which studies infants' acquisition of their native language, rather than second language acquisition,
which deals with the acquisition of additional languages by both children and adults.
The capacity to acquire and use language is a key aspect that distinguishes humans from other
organisms. While many forms of animal communication exist, they have a limited range of
nonsyntactically structured vocabulary tokens that lack cross-cultural variation between groups.
A major concern in understanding language acquisition is how these capacities are picked up by
infants from what appears to be very little input. A range of theories has been created to
explain this apparent problem, including innatism, in which a child is born prepared in some
manner with these capacities, and accounts on which they are instead learned.
History
Plato felt that the word-meaning mapping in some form was innate. Sanskrit grammarians debated
over twelve centuries whether meaning was god-given (possibly innate) or was learned from older
convention - e.g. a child learning the word for cow by listening to trusted speakers talking about
cows.
In modern times, empiricists like Hobbes and Locke argued that knowledge (and for Locke, language)
emerge ultimately from abstracted sense impressions. This led to Carnap's Aufbau, an attempt to
derive all knowledge from sense data, using the notion of "remembered as similar" to bind these
into clusters, which would eventually map to language.
Under Behaviorism, it was argued that language may be learned through a form of operant
conditioning. In Verbal Behavior (1957), B. F. Skinner suggested that the successful use of a sign
such as a word or lexical unit, given a certain stimulus, reinforces its "momentary" or contextual
probability. Empiricist theories of language acquisition include statistical learning theories of
language acquisition, Relational Frame Theory, functionalist linguistics, Social interactionist theory,
and usage-based language acquisition.
This behaviourist idea was strongly attacked by Noam Chomsky in a review article in 1959, calling it
"largely mythology" and a "serious delusion". Instead, Chomsky argued for a more theoretical
approach, based on a study of syntax.
General approaches
Social interactionism
Main article: Social interactionist theory
Social interactionist theory consists of a number of hypotheses on language acquisition. These
hypotheses deal with written, spoken, or visual social tools which consist of complex systems of
symbols and rules on language acquisition and development. The compromise between nature and
nurture is the interactionist approach.
Relational frame theory
Main article: relational frame theory
The relational frame theory (Hayes, Barnes-Holmes, Roche, 2001) provides a wholly
selectionist/learning account of the origin and development of language competence and
complexity. Based upon the principles of Skinnerian behaviorism, RFT posits that children acquire
language purely through interacting with the environment. RFT theorists introduced the concept of
functional contextualism in language learning, which emphasizes the importance of predicting and
influencing psychological events, such as thoughts, feelings, and behaviors, by focusing on
manipulable variables in their context. RFT distinguishes itself from Skinner's work by identifying and
defining a particular type of operant conditioning known as derived relational responding, a learning
process that to date appears to occur only in humans possessing a capacity for language. Empirical
studies supporting the predictions of RFT suggest that children learn language via a system of
inherent reinforcements, challenging the view that language acquisition is based upon innate,
language-specific cognitive capacities.
Emergentism
Emergentist theories, such as MacWhinney's Competition Model, posit that language acquisition is a
cognitive process that emerges from the interaction of biological pressures and the environment.
According to these theories, neither nature nor nurture alone is sufficient to trigger language
learning; both of these influences must work together in order to allow children to acquire a
language. The proponents of these theories argue that general cognitive processes subserve
language acquisition and that the end result of these processes is language-specific phenomena,
such as word learning and grammar acquisition. The findings of many empirical studies support the
predictions of these theories, suggesting that language acquisition is a more complex process than
many believe.

Syntax
A key question is the origin of a child's capacity to process syntax; opposing approaches claim either
that this arises from an innate competence or that it results from some kind of learning from input.
Generativism
Chomsky's generative grammar ignores semantics and language use, focusing on the set of rules
that would generate syntactically correct strings. This led to a model of acquisition which
attempted to discover grammar from examples of well-formed sentences, ignoring semantics or
context. However, it turns out that infinitely many rule-sets or grammars can explain the data, so
discovering the right one would be difficult. Indeed, trained linguists working for decades have not been able
to identify a grammar for any human language. Also, the input available to the child learner was
deemed insufficient (the poverty of stimulus argument).
As a result, Chomsky, Jerry Fodor, Eric Lenneberg and others were led to suggest that some form of
grammar must be innate (the nativist position).
What is innate was claimed to be a universal grammar, initially connected to an organ called the
language acquisition device (LAD). Subsequently, the word 'organ' was replaced by the phrase
"language faculty", and Chomsky suggested that what was universal across all languages was a set
of principles that were modified for each particular language by a set of parameters.
Nativists hold that there are some "hidden assumptions" or biases that allow children to quickly
figure out what is and is not possible in the grammar of their native language, and allow them to
master that grammar by the age of three.
Empiricism
Since Chomsky in the 1950s, many criticisms of the basic assumptions of generative theory have
been put forth. Critics argue that the concept of a Language Acquisition Device (LAD) is unsupported
by evolutionary anthropology, which tends to show a gradual adaptation of the human brain and
vocal cords to the use of language, rather than a sudden appearance of a complete set of binary
parameters delineating the whole spectrum of possible grammars ever to have existed and ever to
exist. (Binary parameters are common to digital computers but not, as it turns out, to neurological
systems such as the human brain.)
Further, while generative theory has several hypothetical constructs (such as movement, empty
categories, complex underlying structures, and strict binary branching) that cannot possibly be
acquired from any amount of linguistic input, it is unclear that human language is actually anything
like the generative conception of it. Since language, as imagined by nativists, is unlearnably complex,
subscribers to this theory argue that it must therefore be innate. A different theory of language,
however, may yield different conclusions. While all theories of language acquisition posit some
degree of innateness, a less convoluted theory might involve less innate structure and more learning.
Under such a theory of grammar, the input, combined with both general and language-specific
learning capacities, might be sufficient for acquisition.
Since 1980, linguists studying children, such as Melissa Bowerman, and psychologists following
Piaget, like Elizabeth Bates and Jean Mandler, have come to suspect that there may indeed be many
learning processes involved in the acquisition process, and that ignoring the role of learning may
have been a mistake.
In recent years, opposition to the nativist position has multiplied. The debate has centered on
whether the inborn capabilities are language-specific or domain-general, such as those that enable
the infant to visually make sense of the world in terms of objects and actions. The anti-nativist view
has many strands, but a frequent theme is that language emerges from usage in social contexts,
using learning mechanisms that are a part of a general cognitive learning apparatus (which is what is
innate). This position has been championed by Elizabeth Bates, Catherine Snow, Brian MacWhinney,
Michael Tomasello, Michael Ramscar, William O'Grady, and others. Philosophers such as Fiona
Cowie and Barbara Scholz with Geoffrey Pullum have also argued against certain nativist claims in
support of empiricism.
Statistical learning
Some language acquisition researchers, such as Elissa Newport, Richard Aslin, and Jenny Saffran,
believe that language acquisition is based primarily on general learning mechanisms, namely
statistical learning. The development of connectionist models that are able to successfully learn
words and syntactical conventions supports the predictions of statistical learning theories of
language acquisition, as do empirical studies of children's learning of words and syntax.
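The kind of general mechanism these researchers have in mind can be illustrated concretely. The sketch below (Python; the syllable inventory and the 0.5 boundary threshold are illustrative assumptions in the spirit of Saffran and colleagues' segmentation experiments, not their actual materials) recovers word boundaries from a continuous syllable stream using nothing but transitional probabilities:

import random
from collections import Counter

# Hypothetical three-syllable "words", loosely modeled on artificial-language stimuli.
WORDS = [["tu", "pi", "ro"], ["go", "la", "bu"], ["bi", "da", "ku"]]

random.seed(0)
stream = [syl for _ in range(200) for syl in random.choice(WORDS)]

# Transitional probability P(b | a) for each adjacent syllable pair.
pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])
tp = {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Posit a word boundary wherever the transitional probability dips.
words, current = [], [stream[0]]
for a, b in zip(stream, stream[1:]):
    if tp[(a, b)] < 0.5:          # low TP -> likely word boundary
        words.append("".join(current))
        current = []
    current.append(b)
words.append("".join(current))

print(sorted(set(words)))          # recovers bidaku, golabu, tupiro

Within-word transitions occur with probability 1.0 in this toy stream, while across-word transitions occur about a third of the time, which is exactly the statistical contrast infants appear to exploit.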
Chunking
Chunking theories of language acquisition constitute a group of theories related to statistical learning
theories in that they assume that the input from the environment plays an essential role; however,
they postulate different learning mechanisms. The central idea of these theories is that language development occurs through the incremental acquisition of meaningful chunks of elementary constituents, which can be words, phonemes, or syllables. Recently, this approach has
been highly successful in simulating several phenomena in the acquisition of syntactic categories
and the acquisition of phonological knowledge. The approach has several features that make it
unique: the models are implemented as computer programs, which enable clear-cut and quantitative
predictions to be made; they learn from naturalistic input, made of actual child-directed utterances;
they produce actual utterances, which can be compared with children's utterances; and they have
simulated phenomena in several languages, including English, Spanish, and German.
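The incremental-chunk idea can be sketched without committing to any particular published model. In the toy learner below (Python; the merge threshold and the example utterances are invented for illustration), unit pairs that recur often enough fuse into chunks, which then behave as single units in later input:

from collections import Counter

class ChunkLearner:
    """Toy incremental chunker: pairs of units that co-occur often enough
    are merged into a chunk, which is then treated as a single unit."""

    def __init__(self, merge_threshold=3):
        self.pair_counts = Counter()
        self.chunks = set()
        self.merge_threshold = merge_threshold

    def process(self, utterance):
        units = self._apply_chunks(utterance.split())
        for pair in zip(units, units[1:]):
            self.pair_counts[pair] += 1
            if self.pair_counts[pair] >= self.merge_threshold:
                self.chunks.add(pair)
        return units

    def _apply_chunks(self, units):
        out, i = [], 0
        while i < len(units):
            if i + 1 < len(units) and (units[i], units[i + 1]) in self.chunks:
                out.append(units[i] + "_" + units[i + 1])
                i += 2
            else:
                out.append(units[i])
                i += 1
        return out

learner = ChunkLearner()
for _ in range(3):
    learner.process("do you want more juice")
print(learner.process("do you want more milk"))
# ['do_you', 'want_more', 'milk'] -- frequent pairs have fused into chunks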
Researchers at the Max Planck Institute for Evolutionary Anthropology have developed a computer
model analyzing early toddler conversations to predict the structure of later conversations. They
showed that toddlers develop their own individual rules for speaking with slots into which they could
put certain kinds of words. A significant outcome of the research was that rules inferred from toddler
speech were better predictors of subsequent speech than traditional grammars.
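The "slots" finding can be illustrated with a much simpler toy than the researchers' actual model (Python; the utterances below are invented, and the real analysis was considerably more sophisticated). A frame is a fixed word whose following position has been filled by several distinct words:

from collections import defaultdict

def find_slot_frames(utterances, min_fillers=3):
    """Collect 'frame + slot' patterns: a fixed word followed by the set
    of words observed to fill the position after it."""
    fillers = defaultdict(set)
    for utterance in utterances:
        words = utterance.split()
        for frame, filler in zip(words, words[1:]):
            fillers[frame].add(filler)
    # Keep only frames whose slot has been filled by several distinct words.
    return {f: s for f, s in fillers.items() if len(s) >= min_fillers}

toddler = ["more milk", "more juice", "more cookie", "more ball",
           "where daddy", "where ball", "where kitty"]
print(find_slot_frames(toddler))
# {'more': {'milk', 'juice', 'cookie', 'ball'}, 'where': {'daddy', 'ball', 'kitty'}}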
Vocabulary
The capacity to acquire the pronunciation of new words depends upon the capacity to engage in speech repetition.[24][25] Children with reduced abilities to repeat nonwords (a marker of speech repetition ability) show a slower rate of vocabulary expansion than children for whom this is easy. It has been proposed that the elementary units of speech have been selected to enhance the ease with which sound and visual input can be mapped onto motor vocalization.
Meaning
Children learn on average 10 to 15 new word meanings each day, but only one of these can be accounted for by direct instruction. The other nine to 14 word meanings must be picked up in some other way. It has been proposed that children acquire these meanings through a process modeled by latent semantic analysis: when children meet an unfamiliar word, they can use information in its context to correctly guess its rough area of meaning.
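A minimal sketch of the latent-semantic-analysis idea follows (Python with NumPy; the words, the "wug" test item, and the co-occurrence counts are all invented for illustration). A word-by-context count matrix is reduced with a truncated singular value decomposition, and an unfamiliar word is placed in the resulting semantic space according to the contexts it appears in:

import numpy as np

words = ["dog", "cat", "bone", "wug"]          # "wug" is the unfamiliar word
counts = np.array([                             # rows: words, columns: contexts
    [3.0, 0.0, 2.0, 1.0],                       # dog
    [2.0, 1.0, 0.0, 3.0],                       # cat
    [2.0, 0.0, 3.0, 0.0],                       # bone
    [0.0, 1.0, 0.0, 2.0],                       # wug: occurs in cat-like contexts
])

# Truncated SVD projects each word into a low-dimensional semantic space.
U, S, Vt = np.linalg.svd(counts, full_matrices=False)
space = U[:, :2] * S[:2]                        # rank-2 word vectors

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

wug = space[words.index("wug")]
for w in ["dog", "cat", "bone"]:
    print(w, round(cosine(wug, space[words.index(w)]), 2))
# "wug" should land nearest "cat": its contexts suggest its rough area of meaning.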

Language learning aptitude


According to John B. Carroll and Stanley Sapon, the authors of the Modern Language Aptitude Test,
the term language learning aptitude refers to the prediction of how well, relative to other
individuals, an individual can learn a foreign language in a given amount of time and under given
conditions.
As with many measures of aptitude, language learning aptitude is thought to be relatively stable
throughout an individual's lifetime.
Language learning disability
Some high schools, universities or other institutions will interpret low language learning aptitude as a
sign of a language learning disability. A pattern of evidence from several sources can help to diagnose
a foreign language learning disability. Evidence can come from scoring poorly on language learning
aptitude assessments, like the Modern Language Aptitude Test, Pimsleur Language Aptitude Battery,
Modern Language Aptitude Test - Elementary or Defense Language Aptitude Battery, while attaining
average or above-average scores on aptitude assessments in other areas, like general intelligence. A
history of scoring poorly on an array of language aptitude tests taken at the appropriate time (MLAT-E for grades 3-6, PLAB for grades 7-12, MLAT for adults) can provide even stronger evidence for a
language learning disability. Evidence can also come from comparing a poor past performance in
foreign language courses with average or above-average performance in other courses unrelated to
language learning.
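This pattern-of-evidence reasoning can be expressed as a toy screening rule. The sketch below (Python; the z-score scale, the one-standard-deviation gap, and the function name are our own assumptions, and this is in no sense a clinical instrument) flags the kind of aptitude discrepancy described above:

def flag_possible_fl_disability(language_aptitude_z, general_ability_z, gap=1.0):
    """Toy screening heuristic: flag a learner whose language aptitude score
    falls well below an otherwise average-or-better general ability score.
    Both scores are assumed to be z-scores on a common scale."""
    return general_ability_z >= 0.0 and (general_ability_z - language_aptitude_z) >= gap

# e.g. an MLAT z-score of -1.2 against a general-intelligence z-score of +0.4:
print(flag_possible_fl_disability(-1.2, 0.4))   # True -> gather further evidence

A flag from a rule like this would only be one source of evidence, to be combined with test history and course performance as the text describes.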

John B. Carroll
John B. Carroll, an influential psychologist in the field of educational linguistics, developed a theory
about a cluster of four abilities that factored into language learning aptitude, separate from verbal
intelligence and motivation. Using these four distinct abilities (phonetic coding ability, grammatical
sensitivity, rote learning ability, and inductive learning ability), Carroll developed the MLAT, a
language aptitude assessment for adults.
The four ability components are defined as follows:
* Phonetic coding ability: the ability to perceive distinct sounds, associate a symbol with that sound, and retain that association
* Grammatical sensitivity: the ability to recognize the grammatical function of a lexical element (word, phrase, etc.) in a sentence without explicit training in grammar
* Rote learning ability: the ability to learn associations between words in a foreign language and their meanings and retain that association
* Inductive learning ability: the ability to infer or induce the rules governing the structure of a language
Paul Pimsleur
Paul Pimsleur, also known for the Pimsleur language learning system, spent time researching four
factors that he believed to be related to language learning aptitude. Pimsleur included grade point
average as an indication of general academic achievement as well as motivation in his factors. In
addition, the verbal ability factor indicated how well a student would be able to handle the
mechanics of learning a foreign language and the auditory factor indicated how well a student would
be able to listen to and produce phrases in a foreign language. To test these four factors, Pimsleur
developed the Pimsleur Language Aptitude Battery, appropriate for students in grades 7-12.

Aptitude Measurements
* Modern Language Aptitude Test: mainly authored by John B. Carroll; appropriate for adults; mainly used by government and military institutions to select and place employees for language training
* Defense Language Aptitude Battery: developed and used by the United States Department of Defense to select candidates for jobs that will require them to attain fluency in a foreign language
* Pimsleur Language Aptitude Battery: authored by Paul Pimsleur; used to assess the language learning aptitude of students in grades 7 to 12
* Modern Language Aptitude Test - Elementary: mainly authored by John B. Carroll; appropriate for children in grades 3 to 6
Uses of aptitude measurement
Measurements of language learning aptitude are used in many different ways. The United States
Department of Defense uses a measurement of language learning aptitude, the Defense Language
Aptitude Battery, to help place employees in positions that require them to learn a new language.
Governmental agencies use the MLAT as a tool to select and place employees in intensive language
training programs. Businesses and missionaries use the MLAT to select, place and plan for language
training. Universities, colleges and high schools use the MLAT to help in the diagnosis of foreign
language learning disabilities. Although each institution has its own policy, many will waive a foreign
language requirement in cases of a foreign language learning disability, in favor of a history or linguistics course.
Schools use the PLAB and MLAT-E to place students in suitable language courses, build a history of a
foreign language learning difficulty, identify especially gifted students in respect to language learning
and to match learning styles with instructional styles.

Language Disability
Any pathological condition which has adverse consequences for the sufferer's ability to use language
normally. Language disabilities must be carefully distinguished from speech defects like lisping and
inability to produce a tapped /r/; these are purely mechanical problems with the nerves and muscles
controlling the organs of speech, and they have no effect upon the language faculty itself (though
stammering, or stuttering in the USA, does seem to involve some cognitive interference as well).
A true disability results either from a genetic defect or from damage to the language areas in the
brain. Since the possible types of damage and defects are virtually limitless, individual sufferers
naturally exhibit an enormous range of disabilities: we hardly ever find two individuals with exactly
the same symptoms. Nevertheless, it has proved possible to identify a number of fairly specific
disabilities and, in many cases, to associate these with particular genetic defects or with damage to
particular areas of the brain.
The best-known disabilities are the several types of aphasia, all of which result from injury to more or
less identifiable areas of the brain. But other types exist. For example, Williams syndrome is known to result from a genetic defect on chromosome 7; this defect causes both some highly specific physiological abnormalities and some rather consistent abnormalities in language use. Less
well understood is Specific Language Impairment, or SLI, which devastates the ability to use the
grammatically inflected forms of words correctly (such as take, takes, took, taking) but which has
only a few non-linguistic consequences; the Canadian-based linguist Myrna Gopnik has recently
uncovered evidence that this impairment too may result from a specific genetic disorder, though one
which has not yet been identified.
But the known range of disabilities is positively startling. A bilingual sufferer may lose one language
completely but retain the second perfectly, and then, after some time, may lose the second but
regain the first. Many types of disability are characterized by greater or lesser degrees of anomia, the
inability to remember words for things. But some sufferers exhibit astoundingly specific deficits, such
as a total inability to use or understand words denoting fruits and vegetables, with no other
problems at all. Some people lose verbs but retain nouns, and so they have no trouble with the noun
milk (as in a glass of milk), but cannot handle the verb milk (as in to milk a cow). Some elderly sufferers from Alzheimer's disease find themselves reverting to languages, dialects, and accents that they spoke in their youth, and losing abilities that they gained in later life.
In spite of some impressive progress, we are still very far from understanding most types of language
disability.

Language Instinct
The powerful tendency of children to acquire language. Any physically normal child who is
adequately exposed to a language will learn it perfectly, and a child exposed to two or three
languages will learn all of them. A hearing child normally learns the surrounding spoken language. A
deaf child exposed to a sign language will learn that. Children exposed only to a pidgin will turn that
pidgin into a full language: a creole. A group of children exposed to no language at all will invent their
own and use it.
For a long time, linguists were slow to appreciate the significance of these observations, and both
creoles and sign languages were widely regarded as peripheral and even inconsequential phenomena
scarcely deserving of serious study by linguists. But times have changed. Very gradually at first, and
then, from about the 1970s, almost explosively, the examination of these phenomena convinced
almost all working linguists that creoles and sign languages were every bit as central to the discipline
as spoken languages with a long history.
We now realize that children are born with a powerful biological drive to learn language, and that
only shattering disability or inhuman
cruelty can prevent a child from acquiring language by one means or another. In the 1990s, the
Canadian psycholinguist Steven Pinker coined the felicitous term language instinct to denote this
remarkable aspect of our biological endowment. Our language faculty, we now strongly suspect, is
built into our genes, and learning a first language may not be so different from learning to see: at
birth, our visual apparatus is not working properly, and it requires some exposure to the visible
world before normal vision is acquired.
Language Ego
Several decades ago, Alexander Guiora, a researcher in the study of personality variables in second
language learning, proposed what he called the language ego (Guiora et al. 1972b; see also Ehrman
1993) to account for the identity a person develops in reference to the language he or she speaks.
For any monolingual person, the language ego involves the interaction of the native language and
ego development. One's self-identity is inextricably bound up with one's language, for it is in the communicative process (the process of sending out messages and having them "bounced" back) that such identities are confirmed, shaped, and reshaped. Guiora suggested that the language ego
may account for the difficulties that adults have in learning a second language. The child's ego is
dynamic and growing and flexible through the age of puberty. Thus a new language at this stage does
not pose a substantial "threat" or inhibition to the ego, and adaptation is made relatively easily as
long as there are no undue confounding socio-cultural factors such as, for example, a damaging
attitude toward a language or language group at a young age. Then the simultaneous physical,
emotional, and cognitive changes of puberty give rise to a defensive mechanism in which the
language ego becomes protective and defensive. The language ego clings to the security of the native
language to protect the fragile ego of the young adult. The language ego, which has now become
part and parcel of self-identity, is threatened, and thus a context develops in which the learner must be willing to make a fool of himself or herself in the trial-and-error struggle of speaking and understanding a foreign language. Younger children are less frightened because they are less aware of language
forms, and the possibility of making mistakes in those forms (mistakes that one really must make in an attempt to communicate spontaneously) does not concern them greatly.
It is no wonder, then, that the acquisition of a new language ego is an enormous undertaking not
only for young adolescents but also for an adult who has grown comfortable and secure in his or her
own identity and who possesses inhibitions that serve as a wall of defensive protection around the
ego. Making the leap to a new or second identity is no simple matter; it can be successful only when
one musters the necessary ego strength to overcome inhibitions. It is possible that the successful
adult language learner is someone who can bridge this affective gap. Some of the seeds of success
might have been sown early in life. In a bilingual setting, for example, if a child has already learned
one second language in childhood, then affectively, learning a third language as an adult might
represent much less of a threat. Or such seeds may be independent of a bilingual setting; they may
simply have arisen out of whatever combination of nature and nurture makes for the development of
a strong ego.
In looking at SLA in children, it is important to distinguish younger and older children. Preadolescent
children of nine or ten, for example, are beginning to develop inhibitions, and it is conceivable that
children of this age have a good deal of affective dissonance to overcome as they attempt to learn a
second language. This could account for difficulties that older pre-pubescent children encounter in
acquiring a second language. Adult vs. child comparisons are of course highly relevant. We know
from both observational and research evidence that mature adults manifest a number of inhibitions.
These inhibitions surface in modern language classes where the learner's attempts to speak in the
foreign language are often fraught with embarrassment. We have also observed the same inhibition
in the "natural" setting (a non-classroom setting, such as a learner living in a foreign culture),
although in such instances there is the likelihood that the necessity to communicate overrides the
inhibitions.
Other affective factors seem to hinge on the basic notion of ego identification. It would appear that
the study of second language learning as the acquisition of a second identity might pose a fruitful and
important issue in understanding not only some differences between child and adult first and second
language learning but second language learning in general.

Learning Styles
Learning styles are various approaches or ways of learning. They involve educational methods, particular to an individual, that are presumed to allow that individual to learn best. It is commonly
believed that most people favor some particular method of interacting with, taking in, and
processing stimuli or information. Based on this concept, the idea of individualized "learning styles"
originated in the 1970s, and has gained popularity in recent years. It has been proposed that
teachers should assess the learning styles of their students and adapt their classroom methods to
best fit each student's learning style. The alleged basis for these proposals has been extensively
criticized.
Models
David Kolb's Model
The David Kolb styles model is based on the Experiential Learning Theory, as explained in David A.
Kolb's book Experiential Learning: Experience as the Source of Learning and Development (1984). The
ELT model outlines two related approaches toward grasping experience: Concrete Experience and
Abstract Conceptualization, as well as two related approaches toward transforming experience:
Reflective Observation and Active Experimentation. According to Kolb's model, the ideal learning
process engages all four of these modes in response to situational demands. In order for learning to
be effective, all four of these approaches must be incorporated. As individuals attempt to use all four
approaches, however, they tend to develop strengths in one experience-grasping approach and one
experience-transforming approach. The resulting learning styles are combinations of the individual's
preferred approaches. These learning styles are as follows:
1. Converger;
2. Diverger;
3. Assimilator;
4. Accommodator.
Convergers are characterized by abstract conceptualization and active experimentation. They are
good at making practical applications of ideas and using deductive reasoning to solve problems.
Divergers tend toward concrete experience and reflective observation. They are imaginative and are
good at coming up with ideas and seeing things from different perspectives.
Assimilators are characterized by abstract conceptualization and reflective observation. They are
capable of creating theoretical models by means of inductive reasoning.
Accommodators use concrete experience and active experimentation. They are good at actively
engaging with the world and actually doing things instead of merely reading about and studying
them.
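Because each style is simply the pairing of one experience-grasping approach with one experience-transforming approach, the mapping can be written out directly. The following minimal sketch (Python; the lowercase identifiers are our own labels, not Kolb's terminology) encodes it:

# Kolb's four learning styles as (grasping approach, transforming approach) pairs.
KOLB_STYLES = {
    ("abstract conceptualization", "active experimentation"): "Converger",
    ("concrete experience", "reflective observation"): "Diverger",
    ("abstract conceptualization", "reflective observation"): "Assimilator",
    ("concrete experience", "active experimentation"): "Accommodator",
}

def kolb_style(grasping, transforming):
    """Map a dominant grasping and transforming preference to a Kolb style."""
    return KOLB_STYLES[(grasping, transforming)]

print(kolb_style("concrete experience", "reflective observation"))  # Diverger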
Kolb's model gave rise to the Learning Style Inventory, an assessment method used to determine an individual's learning style. An individual may exhibit a preference for one of the four styles (Accommodating, Converging, Diverging, and Assimilating) depending on his or her approach to learning via the experiential learning theory model.
Honey and Mumford's Model
In the mid-1970s Peter Honey and Alan Mumford adapted David Kolb's model for use with a
population of middle/senior managers in business. They published their version of the model in The
Manual of Learning Styles (1982) and Using Your Learning Styles (1983).

Two adaptations were made to Kolb's experiential model. Firstly, the stages in the cycle were
renamed to accord with managerial experiences of decision making/problem solving. The Honey &
Mumford stages are:
1. Having an experience
2. Reviewing the experience
3. Concluding from the experience
4. Planning the next steps.
Secondly, the styles were directly aligned to the stages in the cycle and named Activist, Reflector,
Theorist and Pragmatist. These are assumed to be acquired preferences that are adaptable, either at
will or through changed circumstances, rather than being fixed personality characteristics. The Honey
& Mumford Learning Styles Questionnaire (LSQ) is a self-development tool and differs from Kolb's Learning Style Inventory by inviting managers to complete a checklist of work-related behaviours
without directly asking managers how they learn. Having completed the self-assessment, managers
are encouraged to focus on strengthening underutilised styles in order to become better equipped to
learn from a wide range of everyday experiences.
A MORI survey commissioned by The Campaign for Learning in 1999 found the Honey & Mumford
LSQ to be the most widely used system for assessing preferred learning styles in the local
government sector in the UK.
Anthony Gregorc's Model
Dennis W. Mills, Ph.D., discusses the work of Anthony F. Gregorc and Kathleen A. Butler in his article entitled "Applying What We Know: Student Learning Styles". Gregorc and Butler worked to organize a model describing how the mind works. This model is based on the existence of perceptions: our evaluation of the world by means of an approach that makes sense to us. These perceptions in turn
are the foundation of our specific learning strengths, or learning styles.
In this model, there are two perceptual qualities, (1) concrete and (2) abstract, and two ordering abilities, (1) random and (2) sequential.
Concrete perceptions involve registering information through the five senses, while abstract
perceptions involve the understanding of ideas, qualities, and concepts which cannot be seen.
In regard to the two ordering abilities, sequential involves the organization of information in a linear,
logical way and random involves the organization of information in chunks and in no specific order.
Both of the perceptual qualities and both of the ordering abilities are present in each individual, but
some qualities and ordering abilities are more dominant within certain individuals.
There are four combinations of perceptual qualities and ordering abilities based on dominance: 1)
Concrete Sequential; 2) Abstract Random; 3) Abstract Sequential; 4) Concrete Random. Individuals
with different combinations learn in different ways: they have different strengths, different things make sense to them, different things are difficult for them, and they ask different questions throughout the learning process.
Sudbury Model of Democratic Education
Some critics of today's schools, of the concept of learning disabilities, of special education, and of
response to intervention, take the position that every child has a different learning style and pace
and that each child is unique, not only capable of learning but also capable of succeeding.
Sudbury Model democratic schools assert that there are many ways to study and learn. They argue
that learning is a process you do, not a process that is done to you. That is true of everyone; it's basic.
The experience of Sudbury model democratic schools shows that there are many ways to learn
without the intervention of teaching, that is to say, without the intervention of a teacher being imperative.
In the case of reading, for instance, in the Sudbury model democratic schools some children learn from being read to, memorizing the stories and then ultimately reading them. Others learn from cereal boxes, others from game instructions, others from street signs. Some teach themselves letter
sounds, others syllables, others whole words. Sudbury model democratic schools adduce that in their
schools no one child has ever been forced, pushed, urged, cajoled, or bribed into learning how to
read or write; and they have had no dyslexia. None of their graduates are real or functional
illiterates, and no one who meets their older students could ever guess the age at which they first
learned to read or write. In a similar fashion, students learn all the subjects, techniques, and skills in
these schools.
Describing current instructional methods as homogenization and lockstep standardization, critics propose alternative approaches, such as the Sudbury Model of Democratic Education schools, in which children, by enjoying personal freedom and thus being encouraged to exercise personal responsibility for their actions, learn at their own pace and style rather than following a compulsory and chronologically-based curriculum.
Proponents of unschooling have also claimed that children raised in this method learn at their own
pace and style, and do not suffer from learning disabilities.
Gerald Coles asserts that there are partisan agendas behind the educational policy-makers and that the scientific research they use to support their arguments regarding the teaching of literacy is flawed. This includes the idea that there are neurological explanations for learning disabilities.
Other models
Aiming to explain why aptitude tests, school grades, and classroom performance often fail to identify
real ability, Robert J. Sternberg listed various cognitive dimensions in his book Thinking Styles (1997).
Several other models are also often used when researching learning styles. These include the Myers-Briggs Type Indicator (MBTI) and the DISC assessment.
One of the most common and widely-used categorizations of the various types of learning styles is Fleming's VARK model, which expanded upon earlier Neuro-linguistic programming (VAK) models:

1. visual learners;
2. auditory learners;
3. reading/writing-preference learners;
4. kinesthetic learners or tactile learners.
Fleming claimed that visual learners have a preference for seeing (think in pictures; visual aids such
as overhead slides, diagrams, handouts, etc.). Auditory learners best learn through listening (lectures,
discussions, tapes, etc.). Tactile/kinesthetic learners prefer to learn via experiencemoving,
touching, and doing (active exploration of the world; science projects; experiments, etc.). Its use in
pedagogy allows teachers to prepare classes that address each of these areas. Students can also use
the model to identify their learning style and maximize their educational experience by focusing on
what benefits them the most.
Yet another model and perspective on learning styles
Chris J. Jackson's neuropsychological hybrid model of learning in personality argues that Sensation Seeking
provides a core biological drive of curiosity, learning and exploration. A high drive to explore leads to
dysfunctional learning consequences unless cognitions such as goal orientation, conscientiousness,
deep learning and emotional intelligence re-express it in more complex ways to achieve functional
outcomes such as high work performance. Evidence for this model is allegedly impressive.[24] It is a new model of learning and therefore remains in need of verification by independent research. Siadaty & Taghiyareh (2007)[25] report that training based on Conscientious Achievement increases performance but that training based on Sensation Seeking does not. These results strongly support Jackson's model, since the model proposes that Conscientious Achievement will respond to intervention whereas Sensation Seeking (with its biological basis) will not.
Assessment Methods
Learning Style Inventory

The Learning Style Inventory (LSI) is connected with Kolb's model and is used to determine a student's learning style. The LSI assesses an individual's preferences and needs regarding the learning process. It does the following: (1) allows students to designate how they like to learn and indicates how consistent their responses are; (2) provides computerized results which show the student's preferred learning style; (3) provides a foundation upon which teachers can build in interacting with students; (4) provides possible strategies for accommodating learning styles; (5) provides for student involvement in the learning process; and (6) provides a class summary so students with similar learning styles can be grouped together.
Other Methods
Other methods (usually questionnaires) used to identify learning styles include Fleming's VARK
Learning Style Test, Jackson's Learning Styles Profiler (LSP), and the NLP meta programs based iWAM
questionnaire. Many other tests have gathered popularity and various levels of credibility among
students and teachers.
Criticism
Learning-style theories have been criticized by many.
Some psychologists and neuroscientists have questioned the scientific basis for these models and the
theories on which they are based. Writing in the Times Educational Supplement Magazine (29 July
2007), Susan Greenfield said that "from a neuroscientific point of view [the learning styles approach
to teaching] is nonsense".

Many educational psychologists believe that there is little evidence for the efficacy of most learning
style models, and furthermore, that the models often rest on dubious theoretical grounds.
According to Stahl, there has been an "utter failure to find that assessing children's learning styles
and matching to instructional methods has any effect on their learning." Guy Claxton has questioned the extent to which learning styles such as VAK are helpful, particularly as they can have a tendency to label children and therefore restrict learning.
The critique made by Coffield et al.
A non-peer-reviewed literature review by authors from the University of Newcastle upon Tyne
identified 71 different theories of learning style. This report, published in 2004, criticized most of the
main instruments used to identify an individual's learning style. In conducting the review, Coffield
and his colleagues selected 13 of the most influential models for closer study, including most of the
models cited on this page. They examined the theoretical origins and terms of each model, and the
instrument that was purported to assess types of learning style defined by the model. They analyzed
the claims made by the author(s), external studies of these claims, and independent empirical
evidence of the relationship between the 'learning style' identified by the instrument and students'
actual learning. Coffield's team found that none of the most popular learning style theories had been
adequately validated through independent research, leading to the conclusion that the idea of a
learning cycle, the consistency of visual, auditory and kinesthetic preferences and the value of
matching teaching and learning styles were all "highly questionable."
One of the most widely-known theories assessed by Coffield's team was the learning styles model of
Dunn and Dunn, a VAK model. This model is widely used in schools in the United States, and 177
articles have been published in peer-reviewed journals referring to this model. The conclusion of
Coffield et al. was as follows:
Despite a large and evolving research programme, forceful claims made for impact are questionable
because of limitations in many of the supporting studies and the lack of independent research on the
model.
In contrast, a 2005 report provided evidence confirming the validity of Dunn and Dunn's model,
concluding that "matching students learning-style preferences with complementary instruction
improved academic achievement and student attitudes toward learning." This meta-analysis, made
by one of Rita Dunn's students, does not take into account the previous criticism of the research.
Coffield's critique of Gregorc's Style Delineator
Coffield's team claimed that another model, Gregorc's Style Delineator (GSD), was "theoretically and
psychometrically flawed" and "not suitable for the assessment of individuals."
The critique regarding Kolb's model
Mark K. Smith compiled and reviewed some critiques of Kolb's model in his article, "David A. Kolb on Experiential Learning". According to Smith's research, there are six key issues regarding the model. They are as follows: (1) the model doesn't adequately address the process of reflection; (2) the claims it makes about the four learning styles are extravagant; (3) it doesn't sufficiently address the fact of different cultural conditions and experiences; (4) the idea of stages/steps doesn't necessarily match reality; (5) it has only weak empirical evidence; and (6) the relationship between learning processes and knowledge is more complex than Kolb draws it.
Other critiques
Coffield and his colleagues and Mark Smith are not alone in their judgements. Demos, a UK think
tank, published a report on learning styles prepared by a group chaired by David Hargreaves that
included Usha Goswami from Cambridge University and David Wood from the University of
Nottingham. The Demos report said that the evidence for learning styles was "highly variable", and
that practitioners were "not by any means frank about the evidence for their work."
Cautioning against interpreting neuropsychological research as supporting the applicability of
learning style theory, John Geake, Professor of Education at the UK's Oxford Brookes University, and
a research collaborator with Oxford University's Centre for Functional Magnetic Resonance Imaging
of the Brain, commented that
We need to take extreme care when moving from the lab to the classroom. We do remember
things visually and aurally, but information isn't defined by how it was received.
Applications: Learning Styles in the Classroom
Various researchers have attempted to provide ways in which learning style theory can take effect in
the classroom. Two such scholars are Dr. Rita Dunn and Dr. Kenneth Dunn.
In their book, Teaching Students Through Their Individual Learning Styles: A Practical Approach, they
give a background of how learners are affected by elements of the classroom and follow it with
recommendations of how to accommodate students' learning strengths. Dunn and Dunn write that
learners are affected by their: (1) immediate environment (sound, light, temperature, and design);
(2) own emotionality (motivation, persistence, responsibility, and need for structure or flexibility); (3)
sociological needs (self, pair, peers, team, adult, or varied); and (4) physical needs (perceptual
strengths, intake, time, and mobility).
They analyze other research and make the claim that not only can students identify their preferred
learning styles, but that students also score higher on tests, have better attitudes, and are more
efficient if they are taught in ways to which they can more easily relate. Therefore, it is to the
educator's advantage to teach and test students in their preferred styles.
Although learning styles will inevitably differ among students in the classroom, Dunn and Dunn say
that teachers should try to make changes in their classroom that will be beneficial to every learning
style. Some of these changes include room redesign, the development of small-group techniques,
and the development of Contract Activity Packages. Redesigning the classroom involves locating
dividers that can be used to arrange the room creatively (such as having different learning stations
and instructional areas), clearing the floor area, and incorporating student thoughts and ideas into
the design of the classroom.
Small-group techniques often include a "circle of knowledge," in which students sit in a circle and
discuss a subject collaboratively as well as other techniques such as team learning and brainstorming.
Contract Activity Packages are educational plans that facilitate learning by using the following
elements: (1) a clear statement of what the student needs to learn; (2) multisensory resources (auditory, visual, tactile, kinesthetic) that teach the required information; (3) activities through which the newly-mastered information can be used creatively; (4) the sharing of creative projects within small groups of classmates; (5) at least three small-group techniques; and (6) a pre-test, a self-test, and a post-test.
Another scholar who believes that learning styles should have an effect on the classroom is Marilee
Sprenger, as evidenced by her book, Differentiation through Learning Styles and Memory.
Sprenger bases her recommendations for classroom learning on three premises: 1) Teachers can be
learners, and learners can be teachers. We are all both. 2) Everyone can learn under the right
circumstances. 3) Learning is fun! Make it appealing.
She details various ways in which teachers can teach so that students will remember. She categorizes
these teaching methods according to which learning style they fit: visual, auditory, or
tactile/kinesthetic.
Methods for visual learners include ensuring that students can see words written down, using
pictures when describing things, drawing time lines for events in history, writing assignments on the
board, using overhead transparencies/handouts, and writing down instructions.
Methods for auditory learners include repeating difficult words and concepts aloud, incorporating
small-group discussion, organizing debates, listening to books on tape, giving oral reports, and
encouraging oral interpretation.
Methods for tactile/kinesthetic learners include providing hands-on activities (experiments, etc.),
assigning projects, having frequent breaks to allow movement, using visual aids and objects in the
lesson, using role play, and having field trips.
By using a variety of teaching methods from each of these categories, teachers are able to
accommodate different learning styles. They are also able to challenge students to learn in different
ways. Just as Kolb suggested that students who use all four approaches of his learning cycle learn more
effectively, students who are able to learn through a variety of ways are more effective learners.

Learning Theory
In psychology and education, a common definition of learning is a process that brings together
cognitive, emotional, and environmental influences and experiences for acquiring, enhancing, or
making changes in one's knowledge, skills, values, and world views (Illeris, 2000; Ormrod, 1995).
Learning as a process focuses on what happens when the learning takes place. Explanations of what
happens constitute learning theories. A learning theory is an attempt to describe how people and
animals learn, thereby helping us understand the inherently complex process of learning. Learning
theories have two chief values according to Hill (2002). One is in providing us with vocabulary and a
conceptual framework for interpreting the examples of learning that we observe. The other is in
suggesting where to look for solutions to practical problems. The theories do not give us solutions,
but they do direct our attention to those variables that are crucial in finding solutions.
There are three main categories or philosophical frameworks under which learning theories fall:
behaviorism, cognitivism, and constructivism. Behaviorism focuses only on the objectively observable
aspects of learning. Cognitive theories look beyond behavior to explain brain-based learning. And
constructivism views learning as a process in which the learner actively constructs or builds new
ideas or concepts.

Behaviorism
Behaviorism as a theory was most developed by B. F. Skinner. It loosely includes the work of such
people as Thorndike, Tolman, Guthrie, and Hull. What characterizes these investigators is their
underlying assumptions about the process of learning. In essence, three basic assumptions are held
to be true. First, learning is manifested by a change in behavior. Second, the environment shapes
behavior. And third, the principles of contiguity (how close in time two events must be for a bond to
be formed) and reinforcement (any means of increasing the likelihood that an event will be
repeated) are central to explaining the learning process. For behaviorism, learning is the acquisition
of new behavior through conditioning.
There are two types of possible conditioning:
(1) Classical conditioning, where the behavior becomes a reflex response to a stimulus, as in the case of Pavlov's dogs. Pavlov was interested in studying reflexes when he saw that the dogs drooled without the proper stimulus: although no food was in sight, their saliva still dribbled. It turned out that the
dogs were reacting to lab coats. Every time the dogs were served food, the person who served the
food was wearing a lab coat. Therefore, the dogs reacted as if food was on its way whenever they
saw a lab coat. In a series of experiments, Pavlov then tried to figure out how these phenomena were
linked. For example, he struck a bell when the dogs were fed. If the bell was sounded in close
association with their meal, the dogs learned to associate the sound of the bell with food. After a
while, at the mere sound of the bell, they responded by drooling.
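The gradual strengthening of the bell-food association can be captured quantitatively. The sketch below (Python) uses the Rescorla-Wagner learning rule, a standard formalization of classical conditioning that the text itself does not mention; the learning rate and asymptote are illustrative values:

def rescorla_wagner(trials, alpha=0.3, lam=1.0, v=0.0):
    """Rescorla-Wagner rule for a single cue: on each trial the associative
    strength v moves toward the asymptote (lam if food is present, 0 if not)
    by a fraction alpha of the remaining prediction error."""
    history = []
    for food_present in trials:
        target = lam if food_present else 0.0
        v += alpha * (target - v)
        history.append(round(v, 3))
    return history

# Ten bell+food pairings: strength climbs toward the asymptote, mirroring
# the dogs' growing tendency to drool at the bell alone.
print(rescorla_wagner([True] * 10))
# Extinction: starting from a trained association, bell-without-food trials
# make the association decay again.
print(rescorla_wagner([False] * 5, v=0.97))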
(2) Operant conditioning, where the behavior is shaped by reinforcement or punishment.
The theory of operant conditioning was developed by B.F. Skinner and is known as Radical
Behaviorism. The word operant refers to the way in which behavior operates on the environment.
Briefly, a behavior may result either in reinforcement, which increases the likelihood of the behavior
recurring, or punishment, which decreases the likelihood of the behavior recurring. It is important to note that a consequence is not considered a punishment if it does not result in the reduction of the behavior; the terms punishment and reinforcement are thus determined by their actual effects on behavior.
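A toy simulation makes this effect-based definition concrete (Python; the probabilities and step sizes are arbitrary illustrative choices, not drawn from any experiment):

import random

def simulate_operant(trials=1000, p_reinforce=0.8, seed=0):
    """Toy operant loop: each emitted behavior is followed by a consequence
    that nudges the probability of responding up (reinforcement) or down
    (punishment); the labels are defined by exactly those effects."""
    rng = random.Random(seed)
    p_respond = 0.5
    for _ in range(trials):
        if rng.random() < p_respond:                      # the behavior is emitted
            if rng.random() < p_reinforce:
                p_respond = min(0.99, p_respond + 0.01)   # reinforcement
            else:
                p_respond = max(0.01, p_respond - 0.02)   # punishment
    return p_respond

print(simulate_operant())   # ends well above 0.5: reinforcement dominated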
Within this framework, behaviorists are particularly interested in measurable changes in behavior.
Educational approaches such as applied behavior analysis, curriculum based measurement, and
direct instruction have emerged from this model.
Cognitivism
The earliest challenge to the behaviorists came in a publication in 1929 by Bode, a gestalt
psychologist. He criticized behaviorists for being too dependent on overt behavior to explain
learning. Gestalt psychologists proposed looking at the patterns rather than isolated events.
Gestalt views of learning have been incorporated into what have come to be labeled cognitive
theories. Two key assumptions underlie this cognitive approach: (1) that the memory system is an
active organized processor of information and (2) that prior knowledge plays an important role in
learning. Cognitive theories look beyond behavior to explain brain-based learning. Cognitivists
consider how human memory works to promote learning. For example, the physiological processes
of sorting and encoding information and events into short-term memory and long-term memory
are important to educators working under the cognitive theory. The major difference between
gestaltists and behaviorists is the locus of control over the learning activity. For gestaltists, it lies
with the individual learner; for behaviorists, it lies with the environment.
Once memory theories like the Atkinson-Shiffrin memory model and Baddeley's working memory
model were established as a theoretical framework in cognitive psychology, new cognitive
frameworks of learning began to emerge during the 1970s, 80s, and 90s. Today, researchers are
concentrating on topics like cognitive load and information processing theory. These theories of
learning are very useful as they guide instructional design. Aspects of cognitivism can be found in
learning how to learn, social role acquisition, intelligence, learning, and memory as related to age.
Constructivism
Constructivism views learning as a process in which the learner actively constructs or builds new
ideas or concepts based upon current and past knowledge or experience. In other words, "learning
involves constructing one's own knowledge from one's own experiences." Constructivist learning,
therefore, is a very personal endeavor, whereby internalized concepts, rules, and general principles
may consequently be applied in a practical real-world context. One prominent variant is social constructivism, whose proponents posit that knowledge is constructed when individuals engage socially in talk and activity about shared problems or tasks: "Learning is seen as the process by which individuals are introduced to a culture by more skilled members" (Driver et al., 1994). Constructivism itself has many variations, such as active learning, discovery learning, and
knowledge building. Regardless of the variety, constructivism promotes a student's free exploration
within a given framework or structure. The teacher acts as a facilitator who
encourages students to discover principles for themselves and to construct knowledge by working to
solve realistic problems. Aspects of constructivism can be found in self-directed learning,
transformational learning, experiential learning, situated cognition, and reflective practice.
Informal and post-modern theories
Informal theories of education may attempt to break down the learning process in pursuit of
practicality. One of these deals with whether learning should take place as a building of concepts
toward an overall idea, or the understanding of the overall idea with the details filled in later. Critics
believe that trying to teach an overall idea without details (facts) is like trying to build a masonry
structure without bricks.

Other concerns are the origins of the drive for learning. Some argue that learning is primarily self-regulated and that the ideal learning situation is one dissimilar to the modern classroom. Critics
argue that students learning in isolation fail.
Other learning theories
Other learning theories have also been developed for more specific purposes than general learning
theories. For example, andragogy is the art and science of helping adults learn.
Connectivism (learning theory)
Connectivism, "a learning theory for the digital age," has been developed by George Siemens and
Stephen Downes based on their analysis of the limitations of behaviourism, cognitivism and
constructivism to explain the effect technology has had on how we live, how we communicate, and
how we learn. Donald G. Perrin, Executive Editor of the International Journal of Instructional
Technology and Distance Learning says the theory "combines relevant elements of many learning
theories, social structures, and technology to create a powerful theoretical construct for learning in
the digital age."
One aspect of connectivism is the use of a network with nodes and connections as a central
metaphor for learning. In this metaphor, a node is anything that can be connected to another
node: information, data, feelings, images. Learning is the process of creating connections and
developing a network. Not all connections are of equal strength in this metaphor; in fact, many
connections may be quite weak.
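The metaphor is straightforward to make concrete. In the toy sketch below (Python; the node names are invented), learning is simply the creation and strengthening of connections, and repeated use is what makes some connections stronger than others:

from collections import defaultdict

class LearningNetwork:
    """Toy connectivist network: nodes may be anything connectable
    (information, data, feelings, images); learning is the creation
    and strengthening of connections between them."""

    def __init__(self):
        self.strength = defaultdict(float)

    def connect(self, a, b, weight=0.1):
        """Create a connection, or strengthen it if it already exists."""
        self.strength[frozenset((a, b))] += weight

    def strongest(self, n=3):
        return sorted(self.strength.items(), key=lambda kv: -kv[1])[:n]

net = LearningNetwork()
net.connect("RSS feed", "course blog")
for _ in range(3):                  # repeated use strengthens a connection
    net.connect("course blog", "peer discussion")
print(net.strongest(2))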
Connectivism
Connectivism is the integration of principles explored by chaos, network, complexity, and self-organization theories. Learning is a process that occurs within nebulous environments of shifting core
elements not entirely under the control of the individual. Learning (defined as actionable
knowledge) can reside outside of ourselves (within an organization or a database), is focused on
connecting specialized information sets, and the connections that enable us to learn more are more
important than our current state of knowing. Connectivism is driven by the understanding that
decisions are based on rapidly altering foundations. New information is continually being acquired.
The ability to draw distinctions between important and unimportant information is vital. The ability
to recognize when new information alters the landscape based on decisions made yesterday is also
critical.
In other words, "know-how" and "know-what" are being supplemented with "know-where" (the
understanding of where to find the knowledge when it is needed), and meta-learning is becoming
just as important as the learning itself.
Principles of connectivism
* Learning and knowledge rests in diversity of opinions.
* Learning is a process of connecting specialized nodes or information sources.
* Learning may reside in non-human appliances.
* Capacity to know more is more critical than what is currently known.
* Nurturing and maintaining connections is needed to facilitate continual learning.
* Ability to see connections between fields, ideas, and concepts is a core skill.
* Currency (accurate, up-to-date knowledge) is the intent of all connectivist learning activities.
* Decision-making is itself a learning process. Choosing what to learn and the meaning of incoming information is seen through the lens of a shifting reality. While there is a right answer now, it may be wrong tomorrow due to alterations in the information climate affecting the decision.
Connectivism in online learning
Dr. Mohamed Ally at Athabasca University supports connectivism as a more appropriate learning
theory for online learning than older theories such as behaviorism, cognitivism, and constructivism.
This position rests on the idea that the world has changed and become more networked, so learning
theories developed prior to these global changes are less relevant. However, Ally argues that, "What
is needed is not a new stand-alone theory for the digital age, but a model that integrates the
different theories to guide the design of online learning materials."
Connectivist teaching methods

Summing up connectivist teaching and learning, Downes states: "to teach is to model and
demonstrate, to learn is to practice and reflect."
In 2008, Siemens and Downes taught a course called "Connectivism and Connective Knowledge"
which both taught connectivism as the content and modeled it as a teaching method. The course was
free and open to anyone who wished to participate, with over 2000 people worldwide signing up.
The phrase "Massively Open Online Course" was coined to describe this open model. All course
content was available through RSS feeds, and learners could participate with their choice of tools:
threaded discussions in Moodle, blog posts, Second Life, and synchronous online meetings. The
course was again repeated in 2009.
Criticisms of connectivism
Connectivism has been met with criticism on several fronts. Pløn Verhagen has argued that
connectivism is not a learning theory, but rather is a "pedagogical view." Verhagen says that learning
theories should deal with the instructional level (how people learn) but that connectivism addresses
the curriculum level (what is learned and why it is learned). Bill Kerr, another critic of connectivism,
believes that, although technology does affect learning environments, existing learning theories are
sufficient.
It has also been noted that connectivism can be seen as an offshoot of constructivism known as social constructivism.
Multimedia learning theory focuses on principles for the effective use of multimedia in learning.

The Sudbury Model learning theory adduces that learning is a process you do, not a process that is
done to you. This theory states that there are many ways to learn without the intervention of a
teacher.

Language Transfer
Language transfer (also known as L1 interference, linguistic interference, and crosslinguistic influence) refers
to speakers or writers applying knowledge from their native language to a second language. It is most
commonly discussed in the context of English language learning and teaching, but it can occur in any
situation when someone does not have a native-level command of a language, as when translating
into a second language.
Positive and negative transfer
[Figure: a blackboard used in class at Harvard shows students' efforts at placing the diacritics used in Spanish orthography.]
When the relevant unit or structure of both languages is the same, linguistic interference can result in correct language production, called positive transfer, with "correct" here meaning in line with most native speakers' notions of acceptability. An example is the use of cognates. Note, however, that language
interference is most often discussed as a source of errors known as negative transfer. Negative
transfer occurs when speakers and writers transfer items and structures that are not the same in
both languages. Within the theory of contrastive analysis (the systematic study of a pair of languages
with a view to identifying their structural differences and similarities), the greater the differences
between the two languages, the more negative transfer can be expected.
The results of positive transfer go largely unnoticed, and thus are less often discussed. Nonetheless,
such results can have a large effect. Generally speaking, the more similar the two languages are and the more the learner is aware of the relation between them, the more positive transfer will occur. For
example, an Anglophone learner of German may correctly guess an item of German vocabulary from
its English counterpart, but word order and collocation are more likely to differ, as will connotations.
Such an approach has the disadvantage of making the learner more subject to the influence of "false
friends" (false cognates).
Conscious and unconscious transfer
Transfer may be conscious or unconscious. Consciously, learners or unskilled translators may
sometimes guess when producing speech or text in a second language because they have not learned
or have forgotten its proper usage. Unconsciously, they may not realize that the structures and
internal rules of the languages in question are different. Such users could also be aware of both the
structures and internal rules, yet be insufficiently skilled to put them into practice, and consequently
often fall back on their first language.
Multiple acquired languages
Transfer can also occur between acquired languages. In a situation where French is a second
language and Spanish a third, an anglophone learner, for example, may assume that a structure or
internal rule from French also applies to Spanish.
Language transfer produces distinctive forms of learner English, depending on the speaker's first language. Some examples, labeled with a blend of the names of the two languages in question, are Spanglish (Spanish and English), Franglais (French and English), and Denglisch (German and English).
Similar interference effects, of course, also involve languages other than English, such as French and
Spanish (Frespañol), or Catalan and Spanish (Catanyol).
These examples could be multiplied endlessly to reflect the linguistic interactions of speakers of the
thousands of existing or extinct languages.
Such interfered-language names are often also used informally to denote instances of code-switching, code-mixing, or borrowing (the use of loanwords).
Broader effects of language transfer
With sustained or intense contact between native and non-native speakers, the results of language
transfer in the non-native speakers can extend to and affect the speech production of the native-speaking
community. For example, in North America, speakers of English whose first language is
Spanish or French may have a certain influence on native English speakers' use of language when the
native speakers are in the minority. Locations where this phenomenon occurs frequently include
Québec, Canada, and predominantly Spanish-speaking regions in the U.S. For details on the latter,
see the map of the hispanophone world and the list of U.S. communities with Hispanic majority
populations.

Lateralization of brain function


A longitudinal fissure separates the human brain into two distinct cerebral hemispheres, connected
by the corpus callosum. The sides resemble each other and each hemisphere's structure is generally
mirrored by the other side. Yet despite the strong similarities, the functions of each cortical
hemisphere are different.
Popular psychology tends to make broad and sometimes pseudoscientific generalizations about
certain functions (e.g. logic, creativity) being lateral, that is, located in either the right or the left side
of the brain. Researchers often criticize popular psychology for this, because the popularly
lateralized functions are in fact distributed across both hemispheres, even though mental processing
is divided between them.
Many differences between the hemispheres have been observed, from the gross anatomical level to
differences in dendritic structure or neurotransmitter distribution. For example, the lateral sulcus
generally is longer in the left hemisphere than in the right hemisphere. However, experimental
evidence provides little, if any, consistent support for correlating such structural differences with
functional differences. The extent of specialized brain function by area remains under investigation.
If a specific region of the brain is either injured or destroyed, its functions can sometimes be
assumed by a neighboring region, even in the opposite hemisphere, depending upon the area
damaged and the patient's age. Injury may also interfere with a pathway from one area to another.
In this case, alternative (indirect) connections may exist which can be used to transmit the
information to the target area. Such transmission may not be as efficient as the original pathway.
While functions are lateralized, these lateralizations are trends that differ across individuals and
across specific functions. Short of having undergone a hemispherectomy (removal of a
cerebral hemisphere), no one is a "left-brain only" or "right-brain only" person.
Brain function lateralization is evident in the phenomena of right- or left-handedness and of right or
left ear preference, but a person's preferred hand is not a clear indication of the location of brain
function. Although 95% of right-handed people have left-hemisphere dominance for language, only
18.8% of left-handed people have right-hemisphere dominance for language function. Additionally,
19.8% of the left-handed have bilateral language functions. Even within various language functions
(e.g., semantics, syntax, prosody), degree (and even hemisphere) of dominance may differ.
Left vs. Right
Linear reasoning and language functions such as grammar and vocabulary often are lateralized to
the left hemisphere of the brain. Dyscalculia is a neurological syndrome associated with damage to
the left temporo-parietal junction. This syndrome is associated with poor numeric manipulation,
poor mental arithmetic skill, and the inability to either understand or apply mathematical concepts.
In contrast, prosodic language functions, such as intonation and accentuation, often are lateralized to
the right hemisphere of the brain. Functions such as the processing of visual and auditory
stimuli, spatial manipulation, facial perception, and artistic ability seem to be functions of the right
hemisphere.
Other integrative functions, including arithmetic, binaural sound localization, and emotions, seem
more bilaterally controlled.
One of the first indications of brain function lateralization resulted from the research of French
physician Pierre Paul Broca, in 1861. His research involved the male patient nicknamed "Tan", who
suffered a speech deficit (aphasia); "tan" was one of the few words he could articulate, hence his
nickname. In Tan's autopsy, Broca determined he had a syphilitic lesion in the left cerebral
hemisphere. This left frontal lobe brain area (Broca's Area) is an important speech production region.
The motor aspects of speech production deficits caused by damage to Broca's Area are known as
Broca's aphasia. In clinical assessment of this aphasia, it is noted that the patient cannot clearly
articulate the language being employed.
Wernicke
German physician Karl Wernicke continued in the vein of Broca's research by studying language
deficits unlike Broca's aphasia. Wernicke noted that not every deficit was in speech production; some
were receptive. He found that damage to the left posterior, superior temporal gyrus (Wernicke's
area) caused language comprehension deficits rather than speech production deficits, a
syndrome known as Wernicke's aphasia.
Advances in imaging techniques
These seminal works on hemispheric specialization were done on patients and/or postmortem
brains, raising questions about the potential impact of pathology on the research findings. New
methods permit the in vivo comparison of the hemispheres in healthy subjects. Particularly, magnetic
resonance imaging (MRI) and positron emission tomography (PET) are important because of their
high spatial resolution and ability to image subcortical brain structures.
Handedness and language
Broca's Area and Wernicke's Area are linked by a white matter fiber tract, the arcuate fasciculus. This
axonal tract allows the neurons in the two areas to work together in creating vocal language. In more
than 95% of right-handed men, and more than 90% of right-handed women, language and speech
are subserved by the brain's left hemisphere. In left-handed people, the incidence of left-hemisphere
language dominance has been reported as 73% and 61%.
There are ways of determining hemispheric dominance in a person. The Wada Test introduces an
anesthetic to one hemisphere of the brain via one of the two carotid arteries. Once the hemisphere
is anesthetized, a neuropsychological examination is conducted to determine dominance for language
production, language comprehension, verbal memory, and visual memory functions. Less invasive
(though sometimes costlier) techniques, such as functional magnetic resonance imaging and
transcranial magnetic stimulation, are also used to determine hemispheric dominance; their usage
remains controversial because they are still considered experimental.
Movement and sensation
In the 1940s, Canadian neurosurgeon Wilder Penfield and his neurologist colleague Herbert Jasper
developed a technique of brain mapping to help reduce side effects caused by surgery to treat
epilepsy. They stimulated motor and somatosensory cortices of the brain with small electrical
currents to activate discrete brain regions. They found that stimulation of one hemisphere's motor
cortex produces muscle contraction on the opposite side of the body. Furthermore, the functional
map of the motor and sensory cortices is fairly consistent from person to person; Penfield and
Jasper's famous pictures of the motor and sensory homunculi were the result.
Split-brain patients
Research by Michael Gazzaniga and Roger Wolcott Sperry in the 1960s on split-brain patients led to
an even greater understanding of functional laterality. Split-brain patients are patients who have
undergone corpus callosotomy (usually as a treatment for severe epilepsy), a severing of a large part
of the corpus callosum. The corpus callosum connects the two hemispheres of the brain and allows
them to communicate. When these connections are cut, the two halves of the brain have a reduced
capacity to communicate with each other. This led to many interesting behavioral phenomena that
allowed Gazzaniga and Sperry to study the contributions of each hemisphere to various cognitive and
perceptual processes. One of their main findings was that the right hemisphere was capable of
rudimentary language processing but often lacked lexical or grammatical abilities. Eran Zaidel,
however, also studied such patients and found some evidence that the right hemisphere has at
least some syntactic ability.
Pseudoscientific exaggeration of the research
Hines (1987) states that the research on brain lateralization is valid as a research program, though
commercial promoters have applied it to promote subjects and products far outside the implications
of the research. For example, the implications of the research have no bearing on psychological
interventions such as EMDR and neurolinguistic programming (Drenth 2003:53), brain training
equipment, or management training. One explanation for why research on lateralization is so prone
to exaggeration and false application is that the left-right brain dichotomy is an easy-to-understand
notion, which can be oversimplified and misused for promotion in the guise of science. The research
on lateralization of brain functioning is ongoing, and its implications are always tightly delineated,
whereas the pseudoscientific applications are exaggerated, and applied to an extremely wide range
of situations.
Nonhuman brain lateralization
Specialization of the two hemispheres is general in vertebrates, including fish, anurans, reptiles, birds,
and mammals, with the left hemisphere specialized to categorize information and control
everyday, routine behavior, and the right hemisphere responsible for responses to novel events and
for behavior in emergencies, including the expression of intense emotions. An example of a routine
left-hemisphere behavior is feeding, whereas a typical right-hemisphere behavior is escape from
predators and attacks by conspecifics.

Learning Strategies
In a helpful survey article, Weinstein and Mayer (1986) defined learning strategies (LS) broadly as
"behaviours and thoughts that a learner engages in during learning" which are "intended to
influence the learner's encoding process" (p. 315). Later Mayer (1988) more specifically defined LS
as "behaviours of a learner that are intended to influence how the learner processes information" (p.
11). These early definitions from the educational literature reflect the roots of LS in cognitive science,
with its essential assumptions that human beings process information and that learning involves such
information processing. Clearly, LS are involved in all learning, regardless of the content and context.
LS are thus used in learning and teaching math, science, history, languages and other subjects, both
in classroom settings and more informal learning environments. For insight into the literature on LS
outside of language education, the works of Dansereau (1985) and Weinstein, Goetz and Alexander
(1988) are key, and one recent LS study of note is that of Fuchs, Fuchs, Mathes and Simmons (1997).
In the rest of this paper, the focus will specifically be on language LS in L2/FL learning.

Language Learning Strategies Defined


Within L2/FL education, a number of definitions of LLS have been used by key figures in the field.
Early on, Tarone (1983) defined an LS as "an attempt to develop linguistic and sociolinguistic
competence in the target language -- to incorporate these into one's interlanguage competence" (p.
67). Rubin (1987) later wrote that LS "are strategies which contribute to the development of the
language system which the learner constructs and affect learning directly" (p. 22). In their seminal
study, O'Malley and Chamot (1990) defined LS as "the special thoughts or behaviours that individuals
use to help them comprehend, learn, or retain new information" (p. 1). Finally, building on work in
her book for teachers (Oxford, 1990a), Oxford (1992/1993) provides specific examples of LLS (e.g., "In
learning ESL, Trang watches U.S. TV soap operas, guessing the meaning of new expressions and
predicting what will come next") and this helpful definition:

...language learning strategies -- specific actions, behaviours, steps, or techniques that students
(often intentionally) use to improve their progress in developing L2 skills. These strategies can
facilitate the internalization, storage, retrieval, or use of the new language. Strategies are tools for
the self-directed involvement necessary for developing communicative ability.

From these definitions, a change over time may be noted: from the early focus on the product of LLS
(linguistic or sociolinguistic competence), there is now a greater emphasis on the processes and the
characteristics of LLS. At the same time, we should note that LLS are distinct from learning styles,
which refer more broadly to a learner's "natural, habitual, and preferred way(s) of absorbing,
processing, and retaining new information and skills" (Reid, 1995, p. viii), though there appears to be
an obvious relationship between one's language learning style and his or her usual or preferred
language learning strategies.


What are the Characteristics of LLS?


Although the terminology is not always uniform, with some writers using the terms "learner
strategies" (Wenden & Rubin, 1987), others "learning strategies" (O'Malley & Chamot, 1990; Chamot
& O'Malley, 1994), and still others "language learning strategies" (Oxford, 1990a, 1996), there are a
number of basic characteristics in the generally accepted view of LLS. First, LLS are learner generated;
they are steps taken by language learners. Second, LLS enhance language learning and help develop
language competence, as reflected in the learner's skills in listening, speaking, reading, or writing the
L2 or FL. Third, LLS may be visible (behaviours, steps, techniques, etc.) or unseen (thoughts, mental
processes). Fourth, LLS involve information and memory (vocabulary knowledge, grammar rules,
etc.).
Reading the LLS literature, it is clear that a number of further aspects of LLS are less uniformly
accepted. When discussing LLS, Oxford (1990a) and others such as Wenden and Rubin (1987) note a
desire for control and autonomy of learning on the part of the learner through LLS. Cohen (1990)
insists that only conscious strategies are LLS, and that there must be a choice involved on the part of
the learner. Transfer of a strategy from one language or language skill to another is a related goal of
LLS, as Pearson (1988) and Skehan (1989) have discussed. In her teacher-oriented text, Oxford
summarizes her view of LLS by listing twelve key features. In addition to the characteristics noted
above, she states that LLS:

* allow learners to become more self-directed
* expand the role of language teachers
* are problem-oriented
* involve many aspects, not just the cognitive
* can be taught
* are flexible
* are influenced by a variety of factors.
Beyond this brief outline of LLS characteristics, a helpful review of the LLS research and some of the
implications of LLS training for second language acquisition may be found in Gu (1996).

Why are LLS Important for L2/FL Learning and Teaching?


Within 'communicative' approaches to language teaching a key goal is for the learner to develop
communicative competence in the target L2/FL, and LLS can help students in doing so. After Canale
and Swain's (1980) influential article recognized the importance of communication strategies as a key
aspect of strategic (and thus communicative) competence, a number of works appeared about
communication strategies in L2/FL teaching. An important distinction exists, however, between
communication and language learning strategies. Communication strategies are used by speakers
intentionally and consciously in order to cope with difficulties in communicating in an L2/FL (Bialystok,
1990). The term LLS is used more generally for all strategies that L2/FL learners use in learning the
target language, and communication strategies are therefore just one type of LLS. For all L2 teachers
who aim to help develop their students' communicative competence and language learning, then, an
understanding of LLS is crucial. As Oxford (1990a) puts it, LLS "...are especially important for language
learning because they are tools for active, self-directed involvement, which is essential for
developing communicative competence".
In addition to developing students' communicative competence, LLS are important because research
suggests that training students to use LLS can help them become better language learners. Early
research on 'good language learners' by Naiman, Frohlich, Stern, and Todesco (1978, 1996), Rubin
(1975), and Stern (1975) suggested a number of positive strategies that such students employ,
ranging from taking an active approach to tasks and monitoring one's L2/FL performance to listening to
the radio in the L2/FL and speaking with native speakers. A study by O'Malley and Chamot (1990)
also suggests that effective L2/FL learners are aware of the LLS they use and why they use them.
Graham's (1997) work in French further indicates that L2/FL teachers can help students understand
good LLS and should train them to develop and use them.
A caution must also be noted though, because, as Skehan (1989) states, "there is always the
possibility that the 'good' language learning strategies...are also used by bad language learners, but
other reasons cause them to be unsuccessful" (p. 76). In fact, Vann and Abraham (1990) found
evidence suggesting that both 'good' and 'unsuccessful' language learners can be active users of
similar LLS, though importantly they also discovered that their unsuccessful learners
"apparently...lacked...what are often called metacognitive strategies...which would enable them to
assess the task and bring to bear the necessary strategies for its completion" (p. 192). It appears,
then, that a number and range of LLS are important if L2/FL teachers are to assist students both in
learning the L2/FL and in becoming good language learners.

What Kinds of LLS Are There?


There are literally hundreds of different, yet often interrelated, LLS. As Oxford has developed a fairly
detailed list of LLS in her taxonomy, it is useful to summarise it briefly here. First, Oxford (1990b)
distinguishes between direct LLS, "which directly involve the subject matter", i.e. the L2 or FL, and
indirect LLS, which "do not directly involve the subject matter itself, but are essential to language
learning nonetheless" (p. 71). Second, each of these broad kinds of LLS is further divided into LLS
groups. Oxford outlines three main types of direct LLS, for example. Memory strategies "aid in
entering information into long-term memory and retrieving information when needed for
communication". Cognitive LLS "are used for forming and revising internal mental models and
receiving and producing messages in the target language". Compensation strategies "are needed to
overcome any gaps in knowledge of the language" (Oxford, 1990b, p. 71). Oxford (1990a, 1990b) also
describes three types of indirect LLS. Metacognitive strategies "help learners exercise 'executive
control' through planning, arranging, focusing, and evaluating their own learning". Affective LLS
"enable learners to control feelings, motivations, and attitudes related to language learning". Finally,
social strategies "facilitate interaction with others, often in a discourse situation".
A more detailed overview of these six main types of LLS is found in Oxford (1990a, pp. 18-21), where
they are further divided into 19 strategy groups and 62 subsets. Here, by way of example, we will
briefly consider the social LLS that Oxford lists under indirect strategies. Three types of social LLS are
noted in Oxford (1990a): asking questions, co-operating with others, and empathising with others (p.
21). General examples of LLS given in each of these categories are as follows:

Asking questions
1. Asking for clarification or verification
2. Asking for correction
Co-operating with others
1. Co-operating with peers
2. Co-operating with proficient users of the new language
Empathising with others
1. Developing cultural understanding
2. Becoming aware of others' thoughts and feelings

Although these examples are still rather vague, experienced L2/FL teachers may easily think of
specific LLS for each of these categories. In asking questions, for example, students might ask
something specific like "Do you mean...?" or "Did you say that...?" in order to clarify or verify what
they think they have heard or understood. While at first glance this appears to be a relatively
straightforward LLS, in this writer's experience it is one that many EFL students in Japan, for example,
are either unaware of or somewhat hesitant to employ.
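
Because Oxford's scheme is essentially a nested classification, it can help to see it laid out as data.
The sketch below (in Python, written for this overview; it is an illustration of the structure, not
Oxford's own notation) encodes the six main strategy types and fills in only the social-strategy
groups quoted above.

    # A minimal sketch of Oxford's (1990a) LLS taxonomy as nested data.
    # Only the six main types and the social-strategy groups quoted in this
    # section are filled in; the full taxonomy has 19 groups and 62 subsets.
    oxford_taxonomy = {
        "direct": {
            "memory": [],        # entering/retrieving information
            "cognitive": [],     # forming and revising mental models
            "compensation": [],  # overcoming gaps in knowledge
        },
        "indirect": {
            "metacognitive": [], # planning, arranging, focusing, evaluating
            "affective": [],     # controlling feelings and motivation
            "social": [
                ("asking questions", ["asking for clarification or verification",
                                      "asking for correction"]),
                ("co-operating with others", ["co-operating with peers",
                                              "co-operating with proficient users"]),
                ("empathising with others", ["developing cultural understanding",
                                             "becoming aware of others' thoughts and feelings"]),
            ],
        },
    }

    # Example: list the social strategy groups and their subsets.
    for group, subsets in oxford_taxonomy["indirect"]["social"]:
        print(group, "->", ", ".join(subsets))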
What is important to note here is the way LLS are interconnected, both direct and indirect, and the
support they can provide one another (see Oxford, 1990a, pp. 14-16). In the above illustration of
social LLS, for example, a student might ask the questions above of his or her peers, thereby 'co-operating
with others', and in response to the answer he or she receives the student might develop
some aspect of L2/FL cultural understanding or become more aware of the feelings or thoughts of
fellow students, the teacher, or those in the L2/FL culture. What is learned from this experience
might then be supported when the same student uses a direct, cognitive strategy such as 'practising'
to repeat what he or she has learned or to integrate what was learned into a natural conversation
with someone in the target L2/FL. In this case, the way LLS may be inter-connected becomes very
clear.
2. USING LLS IN THE CLASSROOM
With the above background on LLS and some of the related literature, this section provides an
overview of how LLS and LLS training have been or may be used in the classroom, and briefly
describes a three-step approach to implementing LLS training in the L2/FL classroom.
Contexts and Classes for LLS Training
LLS and LLS training may be integrated into a variety of classes for L2/FL students. One type of course
that appears to be becoming more popular, especially in intensive English programmes, is one
focusing on the language learning process itself. In this case, texts such as Ellis and Sinclair's (1989)
Learning to Learn English: A Course in Learner Training or Rubin and Thompson's (1994) How to Be a
More Successful Language Learner might be used in order to help L2/FL learners understand the
language learning process, the nature of language and communication, what language learning
resources are available to them, and what specific LLS they might use in order to improve their own
vocabulary use, grammar knowledge, and L2/FL skills in reading, writing, listening, and speaking.
Perhaps more common are integrated L2/FL courses where these four skills are taught in tandem,
and in these courses those books might be considered as supplementary texts to help learners focus
on the LLS that can help them learn L2/FL skills and the LLS they need to acquire them. In this writer's
experience, still more common is the basic L2/FL listening, speaking, reading, or writing course where
LLS training can enhance and complement the L2/FL teaching and learning. Whatever type of class
you may be focusing on at this point, the three-step approach to implementing LLS training in the
classroom outlined below should prove useful.

Linguistic Relativity
The linguistic relativity principle (also known as the Sapir-Whorf Hypothesis) is the idea that the
varying cultural concepts and categories inherent in different languages affect the cognitive
classification of the experienced world in such a way that speakers of different languages think and
behave differently because of it.
The idea that linguistic structure influences the cognition of language users has a bearing on the fields
of anthropological linguistics, psychology, psycholinguistics, neurolinguistics, cognitive science,
linguistic anthropology, sociology of language and philosophy of language, and it has been the
subject of extensive studies in all of these fields. The idea of linguistic influences on thought has also
captivated the minds of authors and creative artists, inspiring numerous ideas in literature, in the
creation of artificial languages, and even in forms of therapy such as neurolinguistic programming.
The idea originated in the German national romantic thought of the early 19th century, where
language was seen as the expression of the spirit of a nation, as articulated particularly by Wilhelm von
Humboldt. The idea was embraced by figures in the incipient school of American anthropology such
as Franz Boas and Edward Sapir. Sapir's student Benjamin Lee Whorf added observations of how he
perceived these linguistic differences to have consequences in human cognition and behaviour.
Whorf has since been seen as the primary proponent of the principle of linguistic relativity.
Whorf's insistence on the importance of linguistic relativity as a factor in human cognition attracted
opposition from many sides. Psychologist Eric Lenneberg decided to put Whorf's assumptions and
assertions to the test. He formulated the principle of linguistic relativity as a testable hypothesis and
undertook a series of experiments testing whether traces of linguistic relativity could be determined
in the domain of color perception. In the 1960s the idea of linguistic relativity fell out of favor in the
academic establishment, since the prevalent paradigm in linguistics and anthropology, personified
in Noam Chomsky, stressed the universal nature of human language and cognition. When the 1969
study of Brent Berlin and Paul Kay showed that color terminology is subject to universal semantic
constraints, the Sapir-Whorf hypothesis was seen as completely discredited.
From the late 1980s a new school of linguistic relativity scholars, rooted in advances within
cognitive and social linguistics, has examined the effects of differences in linguistic categorization
on cognition, finding broad support for the hypothesis in experimental contexts. Effects of linguistic
relativity have been shown particularly in the domain of spatial cognition and in the social use of
language, but also in the field of color perception. Recent studies have shown that color perception is
particularly prone to linguistic relativity effects when processed in the left brain hemisphere,
suggesting that this brain half relies more on language than the right one does. Currently a balanced view
of linguistic relativity is espoused by most linguists, holding that language influences certain kinds of
cognitive processes in non-trivial ways but that other processes are better seen as subject to
universal factors. Current research is focused on exploring the ways in which language influences
thought and determining to what extent.
History
The idea that language and thought are intertwined goes back to the classical civilizations, but in the
history of European philosophy the relation was not seen as fundamental. St. Augustine, for example,
held the view that language consisted merely of labels applied to already existing concepts. Others held
the opinion that language was but a veil covering up the eternal truths, hiding them from real human
experience. For Immanuel Kant, language was but one of several tools used by humans to experience
the world. In the late 18th and early 19th century the idea of the existence of different national
characters, or "Volksgeister", of different ethnic groups was the moving force behind the German
school of national romanticism and the beginning ideologies of ethnic nationalism.
In 1820 Wilhelm von Humboldt connected the study of language to the national romanticist program
by proposing the view that language is the very fabric of thought; that is, thoughts are produced
as a kind of inner dialogue using the same grammar as the thinker's native language. This view was part
of a larger picture in which the world view of an ethnic nation, their "Weltanschauung", was seen as
being faithfully reflected in the grammar of their language. Von Humboldt argued that languages
with an inflectional morphological type, such as German, English, and the other Indo-European
languages, were the most perfect languages, and that this accordingly explained the dominance of
their speakers over the speakers of less perfect languages.
Wilhelm von Humboldt declared in 1820:
The diversity of languages is not a diversity of signs and sounds but a diversity of views of the world.


The idea that some languages were naturally superior to others, and that the use of primitive
languages maintained their speakers in intellectual poverty, was widespread in the early 20th
century. The American linguist William Dwight Whitney, for example, actively strove to eradicate
Native American languages, arguing that their speakers were savages who would be better off
abandoning their languages, learning English, and adopting a civilized way of life. The first
anthropologist and linguist to challenge this view was Franz Boas who was educated in Germany in
the late 19th century where he received his doctorate in physics. While undertaking geographical
research in northern Canada he became fascinated with the Inuit people and decided to become an
ethnographer. In contrast to Humboldt, Boas always stressed the equal worth of all cultures and
languages, and argued that there was no such thing as primitive languages, but that all languages
were capable of expressing the same content, albeit by widely differing means. Boas saw language as
an inseparable part of culture, and he was among the first to require ethnographers to learn the
native language of the culture being studied and to document verbal culture, such as myths and
legends, in the original language.
According to Franz Boas:
It does not seem likely [...] that there is any direct relation between the culture of a tribe and the
language they speak, except in so far as the form of the language will be moulded by the state of the
culture, but not in so far as a certain state of the culture is conditioned by the morphological traits of
the language."
Boas' student Edward Sapir reached back to the Humboldtian idea that languages contained the key
to understanding the differing world views of peoples. In his writings he espoused the viewpoint that
because of the staggering differences in the grammatical systems of languages no two languages
were ever similar enough to allow for perfect translation between them. Sapir also thought that because
languages represented reality differently, it followed that the speakers of different languages would
perceive reality differently. According to Edward Sapir:
No two languages are ever sufficiently similar to be considered as representing the same social
reality. The worlds in which different societies live are distinct worlds, not merely the same world
with different labels attached.
While Sapir never made a point of studying how languages affected the thought processes of their
speakers, the notion of linguistic relativity lay inherent in his basic understanding of language, and it
would be taken up by his student Benjamin Lee Whorf.
Benjamin Lee Whorf
More than any other linguist, Benjamin Lee Whorf has become associated with what he himself
called "the principle of linguistic relativity". Instead of merely assuming that language influences the
thought and behavior of its speakers (after Humboldt and Sapir) he looked at Native American
languages and attempted to account for the ways in which differences in grammatical systems and
language use affected the way their speakers perceived the world. Whorf's detractors, such as Eric
Lenneberg, Noam Chomsky, and Steven Pinker, have criticized him for not being sufficiently clear in
his formulation of how he meant that language influences thought, and for not providing actual proof of
his assumptions. Most of his arguments were in the form of examples that were anecdotal or
speculative in nature, and functioned as attempts to show how "exotic" grammatical traits were
connected to what were apparently equally exotic worlds of thought. In Whorf's words:
We dissect nature along lines laid down by our native language. The categories and types that we
isolate from the world of phenomena we do not find there because they stare every observer in the
face; on the contrary, the world is presented in a kaleidoscopic flux of impressions which has to be
organized by our minds – and this means largely by the linguistic systems in our minds. We cut
nature up, organize it into concepts, and ascribe significances as we do, largely because we are
parties to an agreement to organize it in this way – an agreement that holds throughout our speech
community and is codified in the patterns of our language [...] all observers are not led by the same
physical evidence to the same picture of the universe, unless their linguistic backgrounds are similar,
or can in some way be calibrated.
Among Whorf's best-known examples of linguistic relativity are instances where an
indigenous language has several terms for a concept that is described with only one word in English
and other European languages. (Whorf used the acronym SAE, "Standard Average European", to allude
to the rather similar grammatical structures of the well-studied European languages, in contrast to
the much more diverse, less studied languages.) One of Whorf's examples was the supposedly
numerous words for 'snow' in the Inuit language, which has since been shown to be a misrepresentation;
another was the way the Hopi language describes water with two different words, one for drinking
water in a container and another for a natural body of water. These examples of polysemy served the double
purpose of showing that indigenous languages sometimes made more fine-grained semantic
distinctions than European languages and that direct translation between two languages, even of
seemingly basic concepts like snow or water, is not always possible.
Another example in which Whorf attempted to show that language use affects behavior came from
his experience in his day job as a chemical engineer working for an insurance company as a fire
inspector. On inspecting a chemical plant he once observed that the plant had two storage rooms for
gasoline barrels, one for the full barrels and one for the empty ones. He further noticed that while no
employees smoked cigarettes in the room for full barrels, no one minded smoking in the room with
empty barrels, although this was potentially much more dangerous due to the highly flammable
vapors that still existed in the barrels. He concluded that the use of the word empty in connection
with the barrels had led the workers to unconsciously regard them as harmless, although consciously
they were probably aware of the risk of explosion from the vapors. This example was later criticized
by Lenneberg as not actually demonstrating causality between the use of the word empty and
the action of smoking, but instead as being an example of circular reasoning. Steven Pinker, in The
Language Instinct, ridiculed this example, claiming that this was a failing of human sight rather than
of language.
Whorf's most elaborate argument for the existence of linguistic relativity regarded what he believed
to be a fundamental difference in the understanding of time as a conceptual category among the
Hopi. He argued that in contrast to English and other SAE languages, the Hopi language does not
treat the flow of time as a row of distinct, countable instances, like "three days" or "five years", but
rather as a single process, and that consequently it does not have nouns referring to units of time. He
proposed that this view of time was fundamental in all aspects of Hopi culture and explained certain
Hopi behavioral patterns.
Whorf died in 1941 at age 44 and left behind him a number of unpublished papers. His line of
thought was continued by linguists and anthropologists such as Harry Hoijer and Dorothy D. Lee, who
both continued investigations into the effect of language on habitual thought, and George L. Trager,
who prepared a number of Whorf's remaining papers for publication. Hoijer was also the first to use
the term "Sapir-Whorf hypothesis" for the complex of ideas about linguistic relativity expressed in
the work of those two linguists. The most important event for the dissemination of Whorf's ideas to a
larger public was the publication in 1956 of his major writings on the topic of linguistic relativity in a
single volume titled Language, Thought and Reality, edited by J. B. Carroll.
Eric Lenneberg
The psychologist Eric Lenneberg was the first to formulate what Whorf had called the principle of
linguistic relativity as a scientifically testable hypothesis. In 1953 Lenneberg published a detailed
criticism of the line of thought that had been fundamental for Sapir and Whorf. He criticised Whorf's
examples from an objectivist view of language, holding that languages are principally meant to
represent events in the real world and that, even though different languages express these ideas in
different ways, the meanings of such expressions and therefore the thoughts of the speaker are
equivalent. He argued that when Whorf was describing in English how a Hopi speaker's view of time
was different, he was in fact translating the Hopi concept into English and therefore disproving the
existence of linguistic relativity. He did not, however, address the fact that Whorf was not principally
concerned with translatability, but rather with how the habitual use of language influences habitual
behavior. Whorf's point was that while English speakers may be able to understand how a Hopi
speaker thinks, they are not actually able to think in that way.
His main criticism of Whorf's works, however, was that Whorf had never actually shown a causal
relation between a linguistic phenomenon and a phenomenon in the realm of thought or behavior, but
merely assumed it to be there. Lenneberg proposed that in order to prove such a causality one would have
to be able to directly correlate linguistic phenomena with behavior. He took up the task of proving or
disproving the existence of linguistic relativity experimentally.
Since neither Sapir nor Whorf had ever stated an actual hypothesis, Lenneberg formulated one based
on a condensation of the different expressions of the notion of linguistic relativity in their works. He
found it necessary to formulate the hypothesis as two basic formulations which he called the "weak"
and the "strong" formulation respectively:
1. Structural differences between language systems will, in general, be paralleled by nonlinguistic
cognitive differences, of an unspecified sort, in the native speakers of the language.
2. The structure of anyone's native language strongly influences or fully determines the worldview he
will acquire as he learns the language.
Since Lenneberg believed that the objective reality denoted by language was the same for speakers
of all languages, he decided to test how different languages codified the same message differently and
whether differences in codification could be proven to affect behaviour.
He designed a number of experiments involving the codification of colors. In his first experiment he
investigated whether it was easier for speakers of English to remember color shades for which they
had a specific name than to remember colors that were not as easily definable by words. This
allowed him to correlate linguistic categorization directly with a non-linguistic task, that of
recognizing and remembering colors. In a later experiment, speakers of two languages that
categorize colors differently (English and Zuni) were asked to perform color-recognition tasks, so
that it could be determined whether the differing color categories of the two groups would affect
their ability to recognize nuances within color categories. Lenneberg in
fact found that Zuni speakers, who classify green and blue together as a single category, did have
trouble recognizing and remembering nuances within the green/blue category. Lenneberg's study
became the beginning of a tradition of investigating linguistic relativity through color
terminology. Lenneberg's two formulations of the hypothesis became widely known and attributed
to Whorf and Sapir, while in fact the second formulation, verging on linguistic determinism, was never
advanced by either of them.
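
The logic of Lenneberg's color-memory design can be illustrated with a toy simulation. The sketch
below (Python) is not his procedure or his data; it simply models a speaker who remembers only a
category label and then guesses a hue within that category, showing why merging green and blue into
one category (as in the Zuni case) should degrade within-category recall.

    # Toy model: hues are numbers 0-9. An "English-like" coding splits them
    # into green (0-4) and blue (5-9); a hypothetical "Zuni-like" coding
    # merges all ten hues into a single green/blue ("grue") category.
    import random

    random.seed(0)

    def category(hue, merged):
        if merged:
            return "grue"                       # one merged category
        return "green" if hue < 5 else "blue"   # two categories

    def recall(hue, merged):
        # Memory model: only the category label is stored; recall is a
        # uniform guess among the hues in that category.
        label = category(hue, merged)
        candidates = [h for h in range(10) if category(h, merged) == label]
        return random.choice(candidates)

    def accuracy(merged, trials=10000):
        hits = 0
        for _ in range(trials):
            hue = random.randrange(10)
            hits += (recall(hue, merged) == hue)
        return hits / trials

    print("two categories:", accuracy(merged=False))  # about 0.20 (guess among 5)
    print("one category  :", accuracy(merged=True))   # about 0.10 (guess among 10)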

The universalist period


Lenneberg was also among the cognitive scientists who began developing the universalist theory
of language, which was finally formulated by Noam Chomsky in the form of Universal Grammar,
effectively arguing that all languages share the same underlying structure. The Chomskyan school
also holds the belief that linguistic structures are largely innate and that what are perceived as
differences between specific languages – the knowledge acquired by learning a language – are
merely surface phenomena that do not affect cognitive processes universal to all human
beings. This theory became the dominant paradigm in American linguistics from the 1960s through
the 1980s, and the notion of linguistic relativity fell out of favor and even became an object of
ridicule.


An example of the influence of universalist theory in the 1960s is the studies by Brent Berlin and Paul
Kay, who continued Lenneberg's research in color terminology. Berlin and Kay studied color
terminology formation in languages and showed clear universal trends in color naming. For example,
they found that even though languages have different color terminologies, they generally recognize
certain hues as more focal than others, and they showed that in languages with few color terms it is
predictable from the number of terms which hues are chosen as focal colors; for example, languages
with only three color terms always center their categories on the focal colors black, white, and
red. The fact that what had been believed to be random differences between color naming in
different languages could be shown to follow universal patterns was seen as a powerful argument
against linguistic relativity. Berlin and Kay's research has since been criticized by relativists such as
John A. Lucy who has argued that Berlin and Kay's conclusions were skewed by their insistence that
color terms should encode only color information. This, Lucy argues, made them blind to the
instances in which so-called color terms did in fact provide other information that might be
considered examples of linguistic relativity. For more information regarding the universalism and
relativism of color terms, see Universalism and relativism of color terminology.
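
The implicational pattern Berlin and Kay reported can be sketched as a simple lookup: given how many
basic color terms a language has, the focal colors are (to a first approximation) predictable. The
sketch below (Python) follows the commonly cited 1969 hierarchy; it is a compressed illustration, and
later revisions of the theory complicate the picture.

    # Berlin & Kay's (1969) implicational hierarchy, to a first approximation:
    # each stage adds terms to those of the previous stage.
    BERLIN_KAY_STAGES = {
        2: ["black", "white"],
        3: ["black", "white", "red"],
        4: ["black", "white", "red", "green or yellow"],
        5: ["black", "white", "red", "green", "yellow"],
        6: ["black", "white", "red", "green", "yellow", "blue"],
        7: ["black", "white", "red", "green", "yellow", "blue", "brown"],
    }

    def predicted_focal_colors(n_terms):
        """Predict the focal colors of a language with n basic color terms."""
        return BERLIN_KAY_STAGES.get(min(n_terms, 7))

    print(predicted_focal_colors(3))  # ['black', 'white', 'red']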
Other universalist researchers dedicated themselves to dispelling other notions of linguistic relativity,
often attacking specific points and examples given by Whorf. For example, Ekkehart Malotki's
monumental study of time expressions in Hopi presented many examples that challenged Whorf's
interpretation of Hopi language and culture as being "timeless".
Today many followers of the universalist school of thought still oppose the idea of linguistic relativity.
For example, Steven Pinker in his book The Language Instinct argues that thought is independent of
language, and that language is itself meaningless in any fundamental way to human thought, and
that human beings do not even think in "natural" language, i.e. any language that we actually
communicate in; rather, we think in a meta-language, preceding any natural language, called
"mentalese". Pinker attacks what he calls "Whorf's radical position," declaring, "the more you
examine Whorf's arguments, the less sense they make."
Pinker and other universalist opponents of the linguistic relativity hypothesis have been accused by
relativists of misrepresenting Whorf's views and of arguing against straw men of their own making.
Cognitive linguistics
In the late 1980s and early 1990s advances in cognitive psychology and cognitive linguistics renewed
interest in the Sapir-Whorf hypothesis. One of those who adopted a more Whorfian approach was
George Lakoff. He argued that language is often used metaphorically and that different languages
use different cultural metaphors that reveal something about how speakers of that language think.
For example, English employs metaphors likening time to money, whereas other languages may not
talk about time in that fashion. Other linguistic metaphors may be common to most languages
because they are based on general human experience – for example, metaphors likening
up with good and down with bad. Lakoff also argues that metaphor plays an important part in
political debates where it matters whether one is arguing in favor of the "right to life" or against the
"right to choose"; whether one is discussing "illegal aliens" or "undocumented workers".
In his book Women, Fire, and Dangerous Things: What Categories Reveal About the Mind (1987), Lakoff
reappraised the hypothesis of linguistic relativity and especially Whorf's views about how linguistic
categorization reflects and/or influences mental categories. He concluded that the debate on linguistic
relativity had been confused and, as a result, fruitless. He identified four parameters on which
researchers differed in their opinions about what constitutes linguistic relativity. One parameter is
the degree and depth of linguistic relativity. Some scholars believe that a few examples of superficial
differences in language and associated behaviour are enough to demonstrate the existence of
linguistic relativity, while others contend that only deep differences that permeate the linguistic and
cultural system suffice as proof. A second parameter is whether conceptual systems are to be seen as
absolute or whether they can be expanded or exchanged during the lifetime of a human being. A
third parameter is whether translatability is accepted as a proof of similarity or difference between
concept systems or whether it is rather the actual habitual use of linguistic expressions that is to be
examined. A fourth parameter is whether to view the locus of linguistic relativity as being in the
language or in the mind. Lakoff concluded that since many of Whorf's critics had criticised him using
definitions of linguistic relativity that Whorf did not himself use, their criticisms were often
ineffective.
The publication of the 1996 anthology Rethinking Linguistic Relativity, edited by sociolinguist John J.
Gumperz and psycholinguist Stephen C. Levinson, marked the beginning of a new period of linguistic
relativity studies and a new way of defining the concept, focusing on cognitive as well as social
aspects of linguistic relativity. The book included studies by cognitive linguists sympathetic to the
hypothesis as well as some working in the opposing universalist tradition. In this volume cognitive
and social scientists laid out a new paradigm for investigations in linguistic relativity. Levinson
presented research results documenting rather significant linguistic relativity effects in the linguistic
conceptualization of spatial categories between different languages. Two separate studies by Melissa
Bowerman and Dan I. Slobin treated the role of language in cognitive processes. Bowerman showed
that certain cognitive processes did not use language to any significant extent and therefore could
not be subject to effects of linguistic relativity. Slobin, on the other hand, described another kind of
cognitive process that he named "thinking for speaking" – the kind of process in which
perceptual data and other kinds of prelinguistic cognition are translated into linguistic terms for the
purpose of communicating them to others. These, Slobin argues, are the kinds of cognitive processes
that are at the root of linguistic relativity.
Present status
Current researchers accept that language influences thought, but in more limited ways than the
broadest early claims. Exploring these parameters has sparked novel research that increases both
the scope and precision of earlier examinations. Current studies of linguistic relativity are marked
neither by the naive approach to exotic linguistic structures and their often merely presumed effects on
thought that marked the early period, nor by the ridicule and discouragement of the universalist
period. Instead, scholars studying linguistic relativity are cognitive scientists interested in examining
the interface between thought, language and culture, and describing the degree and kind of
interrelatedness rather than trying to prove or disprove a theory. Usually, following the tradition of
Lenneberg, they use experimental data to back up their conclusions. A popular contemporary
researcher in this tradition is Stanford cognitive scientist Lera Boroditsky.
Empirical research
John Lucy has identified three main strands of research into linguistic relativity. The first is what he
calls the structure centered approach. This approach starts with observing a structural peculiarity in a
language and goes on to examine its possible ramifications for thought and behavior. The first
example of this kind of research is Whorf's observation of discrepancies between the grammar of
time expressions in Hopi and English. More recent research in this vein is that of John
A. Lucy, describing how usage of the categories of grammatical number and of numeral classifiers in
the Mayan language Yucatec results in Mayan speakers classifying objects according to material rather
than shape, as preferred by speakers of English.[24]
The second strand of research is the "domain centered" approach, in which a semantic domain is
chosen and compared across linguistic and cultural groups for correlations between linguistic
encoding and behavior. The main strand of domain centered research has been the research on color
terminology, although this domain according to Lucy and admitted by color terminology researchers
such as Paul Kay, is not optimal for studying linguistic relativity, because color perception, unlike
other semantic domains, is known to be hard wired into the neural system and as such subject to
more universal restrictions than other semantic domains. Since the tradition of research on color
terminology is by far the largest area of research into linguistic relativity it is described below in its
own section. Another semantic domain which has proven fruitful for studies of linguistic relativity is
the domain of space.[25] Spatial categories vary greatly between languages and recent research has
shown that speakers rely on the linguistic conceptualization of space in performing many quotidian
tasks. Research carried out by Stephen C. Levinson and other cognitive scientists from the Max Planck
Institute for Psycholinguistics has reported three basic kinds of spatial categorization; while many
languages use combinations of them, some languages exhibit only one kind of spatial categorization,
along with corresponding differences in behavior. For example, the Australian language Guugu Yimithirr
only uses absolute directions when describing spatial relations – the position of everything is described
by using the cardinal directions. A speaker of Guugu Yimithirr will define a person as being "north of
the house", while a speaker of English may say that he is "in front of the house" or "to the left of the
house", depending on the speaker's point of view. This difference makes Guugu Yimithirr speakers
better at performing some kinds of tasks, such as finding and describing locations in open terrain,
whereas English speakers perform better in tasks regarding the positioning of objects relative to the
speaker (for example, telling someone to set the table by putting forks to the right of the plate and
knives to the left would be extremely difficult in Guugu Yimithirr).
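
The difference between the two frames of reference is essentially geometric, and a small worked
example makes it concrete: an absolute ("north of") description depends only on the positions
involved, while a relative ("left of") description must also know which way the speaker is facing.
The sketch below (Python) is an invented illustration of that contrast, not a model of any published
experiment.

    import math

    # Toy frame-of-reference converter. Positions are (x, y) with north = +y.
    CARDINALS = ["east", "north", "west", "south"]   # bearings 0, 90, 180, 270
    RELATIVE  = ["front", "left", "behind", "right"] # offsets  0, 90, 180, 270

    def bearing(speaker, target):
        dx, dy = target[0] - speaker[0], target[1] - speaker[1]
        return math.degrees(math.atan2(dy, dx)) % 360

    def absolute_description(speaker, target):
        # Guugu Yimithirr-style: depends only on geometry, not on heading.
        return CARDINALS[round(bearing(speaker, target) / 90) % 4]

    def relative_description(speaker, target, heading_deg):
        # English-style: depends on which way the speaker is facing.
        offset = (bearing(speaker, target) - heading_deg) % 360
        return RELATIVE[round(offset / 90) % 4]

    speaker, house = (0, 0), (0, 5)                 # house due north of the speaker
    print(absolute_description(speaker, house))      # north (regardless of heading)
    print(relative_description(speaker, house, 90))  # front (speaker facing north)
    print(relative_description(speaker, house, 0))   # left  (speaker facing east)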
The third strand of research is the "behavior centered" approach which starts by observing different
behavior between linguistic groups and then proceeds to search for possible causes for that behavior
in the linguistic system. This kind of approach was used by Whorf when he attributed the occurrence
of fires at a chemical plant to the workers' use of the word empty to describe the barrels containing
only explosive vapors. One study in this line of research was conducted by Bloom, who noticed
that speakers of Chinese had unexpected difficulties answering counterfactual questions posed to
them in a questionnaire. He concluded that this was related to the way in which
counterfactuality is marked grammatically in the Chinese language. Another line of study by Frode
Strömnes examined why Finnish factories had a higher occurrence of work-related accidents than
similar Swedish ones. He concluded that cognitive differences between the grammatical usage of
Swedish prepositions and Finnish cases could have caused Swedish factories to pay more attention to
the work process, whereas Finnish factory organizers paid more attention to the individual worker.
Other research of importance to the study of linguistic relativity has been Daniel Everett's studies of
the Pirahã people of the Brazilian Amazon. Everett observed several peculiarities in Pirahã culture
that corresponded with linguistically rare features. The Pirahã, for example, have neither numbers nor
color terms in the way those are normally defined, and correspondingly they don't count or classify
colors in the way other cultures do. Furthermore, when Everett tried to instruct them in basic
mathematics they proved unresponsive. Everett did not draw the conclusion that it was the lack of
numbers in their language that prevented them from grasping mathematics, but instead concluded
that the Pirahã had a cultural ideology that made them extremely reluctant to adopt new cultural
traits, and that this cultural ideology was also the reason that certain linguistic features otherwise
believed to be universal did not exist in their language. Critics have argued that if the test
subjects are unable to count for some other reason (perhaps because they are nomadic
hunter/gatherers with nothing to count and hence no need to practice doing so) then one should not
expect their language to have words for such numbers. That is, it is the lack of need which explains
both the lack of counting ability and the lack of corresponding vocabulary.

Color terminology research


There are two formal sides to the color debate, the universalist and the relativist. The universalist
side claims that our biology is one and the same, and so the development of color terminology has
absolute universal constraints, while the relativist side claims that the cross-linguistic variability of
color terms points to more culture-specific phenomena. Because color exhibits both biological and
linguistic aspects, it has become a widely studied domain that addresses the question of the relation
between language and thought.

The color debate was made popular in large part due to Brent Berlin and Paul Kay's famous 1969 study
and their subsequent publication of Basic Color Terms: Their Universality and Evolution. Although
most of the work on color terminology has been done since Berlin and Kay's famous study, other
research predates it, including the mid-nineteenth-century work of William Gladstone and Lazarus
Geiger, which also predates the Sapir-Whorf hypothesis, as well as the work of Eric Lenneberg and
Roger Brown in the 1950s and 1960s.
Linguistic relativity and artificial languages
The Sapir-Whorf hypothesis influenced the development and standardization of Interlingua during
the first half of the 20th century, but this was largely due to Sapir's direct involvement. In 1955, Dr.
James Cooke Brown constructed an artificial language called Loglan, based on the principles of
predicate logic, in order to test the hypothesis. A reformed variant of the language, Lojban, has an
active community of enthusiasts. However, no formal tests have been done on either.
Kenneth E. Iverson, the originator of the APL programming language, believed that the Sapir-Whorf
hypothesis applied to computer languages (without actually mentioning the hypothesis by name). His
Turing award lecture, "Notation as a tool of thought", was devoted to this theme, arguing that more
powerful notations aided thinking about computer algorithms. The essays of Paul Graham explore
similar themes, such as a conceptual hierarchy of computer languages, with more expressive and
succinct languages at the top.


Manner of Articulation
The action of the speech organs involved in producing a particular consonant. A consonant is produced by narrowing the vocal tract at some point along its length. The particular speech organs chosen to make the constriction represent the place of articulation, but even at a single place it is usually possible to make several different kinds of constriction. The type of constriction made in a particular instance is the manner of articulation.
There are several types of manner. In a plosive (like [b] or [k]), a complete closure is made, blocking off the airflow, and the closure is released suddenly. In an affricate (like [tʃ] or [ts]), a complete closure is made and then released gradually, with friction noise. In a fricative (like [f] or [z]), there is no complete closure, but air is forced through a tiny opening, producing friction noise. These three types are collectively called obstruents, because the airflow is strongly obstructed. The remaining types are collectively called sonorants. In a nasal (like [m] or [n]), a complete closure is made in the mouth, but the velum is lowered, so that air flows out through the nose. In an approximant (like [w] or most types of English /r/), the air is allowed to flow through a relatively large opening, and no friction noise is produced. (At the phonetic level, such consonants are strictly vowels, but they pattern in languages like consonants.) In a flap (like the [ɽ] of many languages of India), the tongue is flipped rapidly from one place to another, briefly striking something else as it moves. A tap (like Spanish [ɾ] in pero 'but') is similar, except that the tongue finishes where it started. (Some books do not distinguish between flaps and taps, but it is preferable to do so.) In a trill (like Spanish [r] in perro 'dog'), air forced through a smallish opening makes the tongue vibrate. All these are examples of median consonants, in which all airflow is through the centre-line of the mouth. However, it is also possible to block off the centre-line and force the air to flow through one or both sides of the mouth; such a consonant is lateral. We can produce a lateral affricate (like the [tɬ] in Nahuatl, the Aztec language of Mexico), a lateral fricative (like the [ɬ] of Welsh Llanelli), or a lateral approximant (commonly just called a lateral) (like English [l]).
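
For readers who find a small data structure clearer than prose, the following is a minimal sketch in Python of the classification above. The symbols and groupings are taken directly from the examples in this entry; the inventory is deliberately tiny and illustrative, not a complete phonetic taxonomy.

# Toy mapping of manners of articulation to example consonants,
# following the examples given in the text above.
MANNER_EXAMPLES = {
    "plosive":     ["b", "k"],
    "affricate":   ["tʃ", "ts"],
    "fricative":   ["f", "z"],
    "nasal":       ["m", "n"],
    "approximant": ["w", "l"],
    "flap":        ["ɽ"],
    "tap":         ["ɾ"],
    "trill":       ["r"],
}

# Obstruents are the manners with strongly obstructed airflow;
# every other manner counts as a sonorant.
OBSTRUENTS = {"plosive", "affricate", "fricative"}

def is_obstruent(manner: str) -> bool:
    return manner in OBSTRUENTS

print(is_obstruent("fricative"))  # True
print(is_obstruent("nasal"))      # False (a sonorant)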

Mean Length of Utterance


Mean Length of Utterance (or MLU) is a measure of linguistic productivity in children. It is
traditionally calculated by collecting 100 utterances spoken by a child and dividing the number of
morphemes by the number of utterances. A higher MLU is taken to indicate a higher level of
language proficiency.
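The calculation itself is simple enough to express in a few lines of code. The following is a minimal sketch in Python, assuming the utterances have already been transcribed and segmented into morphemes (in practice the hard part); the function name and data layout are illustrative, not part of any standard tool.

# A minimal MLU calculation: each utterance is a list of morphemes.
def mean_length_of_utterance(utterances: list[list[str]]) -> float:
    if not utterances:
        raise ValueError("need at least one utterance")
    total_morphemes = sum(len(u) for u in utterances)
    return total_morphemes / len(utterances)

# Toy sample: three utterances containing 2, 3 and 4 morphemes.
sample = [
    ["more", "juice"],               # 2 morphemes
    ["doggie", "run", "-ing"],       # 3 morphemes
    ["I", "want", "-ed", "cookie"],  # 4 morphemes
]
print(mean_length_of_utterance(sample))  # 3.0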
A study by Bishop and Adams (1990) suggests that MLU at age 4.5 is a good predictor of reading ability at age 8. Nonetheless, MLU is considered controversial and should not be used as the only diagnostic measure of language proficiency in children.

Markedness
Markedness is a linguistic concept that developed out of the Prague School. A marked form is a non-basic or less natural form; an unmarked form is a basic, default form. For example, lion is the unmarked choice in English: it could refer to a male or a female lion. But lioness is marked, because it can refer only to females. The unmarked forms serve as general terms: "brotherhood of man", for instance, is sometimes used to refer to all people, both men and women, while "sisterhood" refers only to women. The form of a word that is conventionally chosen as the lemma form is typically the least marked form.


Markedness originally developed in phonology, where phonetic symbols were literally "marked" to indicate additional features, such as voicing, nasalization or roundedness. Markedness is still an influential concept in current phonological theory: in Optimality Theory, many of the central arguments concerning constraints and their ordering have to do with the markedness of a form.
The concept of markedness has been extended to other areas of grammar as well, such as morphology, syntax and semantics. Markedness is a rather fuzzy notion, especially if it is not made clear whether something is marked phonetically, phonologically, morphologically, syntactically, or semantically. There are many different criteria for determining which forms count as more marked: some quantify markedness in terms of statistical frequency of use, others define it in psycholinguistic terms, and yet others rely merely on their own intuitions on the subject.
An important fact is that what is more highly marked on one linguistic level may be less highly marked on another. For example, "ant" is less marked than "ants" on the morphological level, but on the semantic and frequency levels it may be more marked, since ants are more often encountered many at once than one at a time. The latter fact is reflected in the plural and singular forms of certain Frisian words: in Frisian, nouns with irregular singular-plural stem variations are undergoing regularization. Usually this means that the plural is re-formed as a regular form of the singular:
OLD PARADIGM: "koal" (coal), "kwallen" (coals); REGULARIZED FORMS: "koal" (coal), "koalen" (coals).
However, a number of words instead re-form the singular as a regular form of the plural:
OLD PARADIGM: "earm" (arm), "jermen" (arms); REGULARIZED FORMS: "jerm" (arm), "jermen" (arms).
The common denominator of the nouns that regularize the singular to match the plural is that they are terms that more often occur in pairs or in groups; they are said to be (semantically, not morphologically) locally unmarked in the plural.
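
The frequency criterion used in the "ant"/"ants" example can be made concrete with a small sketch in Python. The toy corpus below is invented purely for illustration; on this criterion, whichever of two competing forms occurs more often in running text counts as less marked.

from collections import Counter

# Invented toy corpus; a real study would use a large text collection.
corpus = "the ants marched and the ants sang while one ant slept".split()
counts = Counter(corpus)

def less_marked(form_a: str, form_b: str) -> str:
    """Return whichever competing form is more frequent, hence less
    marked on this (purely statistical) criterion."""
    return form_a if counts[form_a] >= counts[form_b] else form_b

print(less_marked("ant", "ants"))  # 'ants': the plural wins in this toy corpus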
More generally, markedness can be defined as the property which distinguishes less neutral linguistic forms from competing, more neutral ones. Though the concept is older, the term markedness was introduced by the European linguists of the Prague School in the 1920s, and it is now regarded as linguistically central.
Markedness is a very broad notion applying at all levels of analysis. Generally speaking, a marked form is any linguistic form which is less usual or less neutral than some other form (the unmarked form) from any of a number of points of view. (Note that no linguistic item is absolutely neutral; neutrality is a relative concept.) A marked form may be distinguished from an unmarked one by the presence of additional linguistic material, by the presence of additional nuances of meaning, by greater rarity in a particular language or in languages generally, or in several other ways.
For example, voiceless vowels and voiceless laterals are marked with respect to voiced ones, since the voiceless ones are far rarer in the world's languages and are generally found only in languages that also have voiced ones, while most languages have only the voiced ones. The affricate [pf], as in German Pflaume 'plum', is marked with respect to both [p] and [f], since the last two are very frequent in languages, while [pf] is exceedingly rare. English lioness is marked with respect to lion, since it contains additional morphological material and is of less general applicability. English brethren is marked with respect to brothers, since the first (historically, simply the plural form of brother) is restricted to certain special contexts, and bunny is marked with respect to rabbit, since the first carries additional emotive meaning absent from the second. A passive sentence like Janet was arrested by the police is marked with respect to the active The police arrested Janet, since the passive contains more material, has a more complex structure, and is rarer than the active.

The several criteria for markedness may not always coincide. For example, in some Pacific languages,
passive sentences are far commoner than active ones; hence the passive ones, although marked
from the point of view of grammar, are unmarked from the point of view of discourse. Markedness
values can also change over time: the formal Latinate phrase prior to was once highly marked with
respect to native English before, but for very many speakers prior to has now become the ordinary,
and hence unmarked, form, as in prior to the war.

Meaningful vs. Rote Learning


According to Ausubel, "the most important single factor influencing learning is what the learner
already knows" (Novak, 1998, p.71). Relationships between concepts are formed when two
concepts overlap on some level. As learning progresses, this network of concepts and relationships
becomes increasingly complex. Ausubel contrasts meaningful learning with rote learning, in which a student simply memorizes information without relating it to previously learned knowledge. As a result, such information is easily forgotten and not readily applied to problem-solving situations, because it was never connected with concepts already learned. Meaningful learning, by contrast, requires the learner to choose to relate new information to relevant knowledge that already exists in the learner's cognitive structure. This takes more effort initially; however, once knowledge frameworks are developed, definitions and the meanings of concepts become easier to acquire. Further, concepts learned meaningfully are retained much longer, sometimes for a lifetime.
Rote learning, common in many schools and universities today, is shown to be of little use for
achieving the goals of individuals and society in a time when creative production of new knowledge is
in heightened demand. Knowledge creation is viewed as a special form of meaningful learning.
Three basic requirements for meaningful learning are: the learner's relevant prior knowledge, meaningful material (often selected by the teacher) and learner choice (choosing to use meaningful learning instead of rote learning). An important advantage of meaningful learning is that it can be applied to a wide variety of new problems and contexts. This transferability is necessary for creative thinking.

Metalinguistic awareness
Metalinguistic Awareness is an ability to objectify language as a process and as a thing.
The concept of metalinguistic awareness is helpful in explaining the execution and transfer of linguistic knowledge across languages (e.g. code switching and translation among bilinguals). Metalinguistics is the ability to consciously reflect on the nature of language, and it rests on the following skills: 1) knowing that language has a potential greater than that of simple symbols (it goes beyond the meaning); 2) knowing that words are separable from their referents (meaning resides in the mind, not in the name; i.e. Sonia is Sonia, and I will be the same person even if somebody calls me another name); 3) knowing that language has a structure that can be manipulated (language is malleable: you can change and write things in many different ways; for example, if something is written ungrammatically, you can correct it). Metalinguistic awareness is also known as metalinguistic ability, which is close to metacognition ("knowing about knowing").
Meta-linguistic awareness is the ability to reflect on the use of language. As metalinguistic awareness
grows, children begin to recognize that statements may have a literal meaning and an implied
meaning. They begin to make more frequent and sophisticated use of metaphors such as the simile,
"We packed the room like sardines."


Between the ages of 6 and 8 most children begin to expand upon their metalinguistic awareness and
start to recognize irony and sarcasm. These concepts require the child to understand the subtleties of
an utterance's social and cultural context.

Microteaching
Microteaching is a training technique whereby the teacher reviews a videotape of the lesson after each session, in order to conduct a "post-mortem". Teachers find out what has worked, which aspects have fallen short, and what needs to be done to enhance their teaching technique. Invented in the mid-1960s at Stanford University by Dr. Dwight Allen, microteaching has been used with success for several decades as a way to help teachers acquire new skills.
In the original process, a teacher was asked to prepare a short lesson (usually 20 minutes) for a small
group of learners who may not have been her own students. This was videotaped, using VHS. After
the lesson, the teacher, teaching colleagues, a master teacher and the students together viewed the
videotape and commented on what they saw happening, referencing the teacher's learning
objectives. Seeing the video and getting comments from colleagues and students provided teachers
with an often intense "under the microscope" view of their teaching.
Micro lessons are good opportunities to present sample "snapshots" of what and how one teaches, and to get feedback from colleagues about how it was received. They offer a chance to try teaching strategies that the teacher may not use regularly, and a safe occasion to experiment with something new and get feedback on technique.
Since its inception in 1963, microteaching has become an established teacher-training procedure in many universities and school districts. This training procedure is geared towards simplifying the complexities of the regular teaching-learning process. Class size, time, task, and content are scaled down to provide optimal training environments. The supervisor demonstrates the skill to be practiced, either in a live demonstration or in a video presentation. Then the group members each select a topic and prepare a lesson of five to ten minutes. The teacher trainee then has the opportunity to practice and evaluate her use of the skills. Practice takes the form of a ten-minute microteaching session in which five to ten pupils are involved.
Feedback in microteaching is critical for teacher-trainee improvement. It is the information that a trainee receives concerning his or her attempts to imitate certain patterns of teaching. The built-in feedback mechanism in microteaching acquaints the trainee with the success of the performance and enables him or her to evaluate and improve teaching behavior. Electronic media that can facilitate effective feedback are a vital aspect of microteaching (Teg, 2007).

Milestones in Language Development


1. By age one
Recognizes name
Says 2-3 words besides "mama" and "dada"
Imitates familiar words
Understands simple instructions
Recognizes words as symbols for objects: car (points to garage), cat (meows)
Activities to encourage your child's language
Respond to your child's coos, gurgles, and babbling
Talk to your child as you care for him or her throughout the day
Read colorful books to your child every day
Tell nursery rhymes and sing songs
Teach your child the names of everyday items and familiar people

Take your child with you to new places and situations


2. Between 1-2 years
Understands "no"
Uses 10 to 20 words, including names
Combines two words such as "baba bye-bye"
Waves good-bye
Makes the "sounds" of familiar animals
Gives a toy when asked
Uses words such as "more" to make wants known
Points to his or her toes, eyes, and nose
Brings object from another room when asked
Activities to encourage your child's language
Reward and encourage early efforts at saying new words
Talk to your baby about everything you're doing while you're with him
Talk simply, clearly, and slowly to your child
Talk about new situations before you go, while you're there, and again when you are home
Look at your child when he or she talks to you
Describe what your child is doing, feeling, hearing
Let your child listen to children's records and tapes
Praise your child's efforts to communicate
3. Between 2-3 years
Identifies body parts
Carries on 'conversation' with self and dolls
Asks "what's that?" And "where's my?"
Uses 2-word negative phrases such as "no want".
Forms some plurals by adding "s"; book, books
Has a 450 word vocabulary
Gives first name, holds up fingers to tell age
Combines nouns and verbs "mommy go"
Understands simple time concepts: "last night", "tomorrow"
Refers to self as "me" rather than by name
Tries to get adult attention: "watch me"
Likes to hear same story repeated
May say "no" when means "yes"
Talks to other children as well as adults
Solves problems by talking instead of hitting or crying
Answers "where" questions
Names common pictures and things
Uses short sentences
Matches 3-4 colors, knows big and little
Activities to encourage your child's language
Repeat new words over and over
Help your child listen and follow instructions by playing games: "pick up the ball," "touch baba's nose"
Take your child on trips and talk about what you see before, during and after the trip
Let your child tell you answers to simple questions
Read books every day, perhaps as part of the bedtime routine
Listen attentively as your child talks to you
Describe what you are doing, planning, thinking
Have the child deliver simple messages for you

Carry on conversations with the child, preferably when the two of you have some quiet time
together
Ask questions to get your child to think and talk
Show the child you understand what he or she says by answering, smiling, and nodding your head
Expand on what the child says. If he or she says "more juice," you say, "Mama wants more juice."
4. Between 3-4 years
Can tell a story
Has a sentence length of 4-5 words
Has a vocabulary of nearly 1000 words
Names at least one color
Understands "yesterday," "summer", "lunchtime", "tonight", "little-big"
Begins to obey requests like "put the block under the chair"
Knows his or her last name, name of street on which he/she lives and several nursery rhymes
Activities to encourage your child's language
Talk about how objects are the same or different
Help your child to tell stories using books and pictures
Let your child play with other children
Read longer stories to your child
Pay attention to your child when he's talking
Talk about places you've been or will be going
5. Between 4-5 years
Has sentence length of 4-5 words
Uses past tense correctly
Has a vocabulary of nearly 1500 words
Points to colors red, blue, yellow and green
Identifies triangles, circles and squares
Understands "In the morning", "next", "noontime"
Can speak of imaginary conditions such as "I hope"
Asks many questions, asks "who?" and "why?"
Activities to encourage your child's language
Help your child sort objects and things (ex. things you eat, animals)
Teach your child how to use the telephone
Let your child help you plan activities such as what you will make for Thanksgiving dinner
Continue talking with him about his interests
Read longer stories to him
Let her tell and make up stories for you
Show your pleasure when she comes to talk with you

6. Between 5-6 years


Has a sentence length of 5-6 words
Has a vocabulary of around 2000 words
Defines objects by their use (you eat with a fork) and can tell what objects are made of
Knows spatial relations like "on top", "behind", "far" and "near"
Knows her address
Identifies a penny, nickel and dime
Knows common opposites like "big/little"
Understands "same" and "different"
Counts ten objects
Asks questions for information

Distinguishes left and right hands on herself


Uses all types of sentences, for example "let's go to the store after we eat"
Activities to encourage your child's language
Praise your child when she talks about her feelings, thoughts, hopes and fears
Comment on what you did or how you think your child feels
Sing songs, rhymes with your child
Continue to read longer stories
Talk with him as you would an adult
Look at family photos and talk to him about your family history
Listen when your child talks to you.
The child should be evaluated regularly (at 3-month intervals) by a speech therapist and a psychologist to chart his or her development.
Mirror Neurons
It was discovered in the 1990s that the same brain neurons fire both when a person performs an action and when that person observes someone else performing the action. Most suggestively for linguistics, these neurons are located in Broca's area, a region associated with language. The phenomenon of mirror neurons has lent support to the gestural hypothesis of language evolution, which proposes that language developed as a communication system out of physical imitation in a community.
Furthermore, the mirror neuron effect occurs more strongly when the observed action tends towards an instrumental goal rather than being apparently unmotivated. For example, more neurons fire when the action involves picking up an apple and eating it than when simply picking up an apple and moving it. By extension, this seems to suggest that language evolved primarily as a goal-driven mechanism (which would support schema theory), and that communication towards a purpose was a primary function of language at its origin. As with most theories of language evolution, conclusive evidence is hard to come by.

Monitor hypothesis
The monitor hypothesis asserts that a learner's learned system acts as a monitor to what they are
producing. In other words, while only the acquired system is able to produce spontaneous speech,
the learned system is used to check what is being spoken.
Before the learner produces an utterance, he or she internally scans it for errors and uses the learned system to make corrections. Self-correction occurs when the learner uses the monitor to correct a sentence after it is uttered. According to the hypothesis, such self-monitoring and self-correction are the only functions of conscious language learning.
The monitor model thus predicts faster initial progress by adults than by children, as adults use this monitor when producing L2 utterances before having acquired the ability for natural performance, and adult learners will contribute more to conversations earlier than children will.
Three conditions for use of the monitor
According to Krashen, for the Monitor to be successfully used, three conditions must be met:
1. The acquirer/learner must know the rule
This is a very difficult condition to meet because it means that the speaker must have had explicit
instruction on the language form that he or she is trying to produce.

2. The acquirer must be focused on correctness


He or she must be thinking about form, and it is difficult to focus on meaning and form at the
same time.
3. The acquirer/learner must have time to use the monitor
Using the monitor requires the speaker to slow down and focus on form.
Difficulties using the monitor
There are many difficulties with the use of the monitor, making the monitor rather weak as a
language tool.

Having time to use the monitor: there is a price for using the monitor: the speaker is focused on form rather than meaning, resulting in the production and exchange of less information and a slower flow of conversation. Some speakers over-monitor to the point that the conversation is painfully slow and sometimes difficult to listen to.
Knowing the rule: this is a difficult condition to meet, because even the best students do not learn every rule that is taught, cannot remember every rule they have learned, and cannot always correctly apply the rules they do remember. Furthermore, not every rule of a language is included in a text or taught by the teacher.
The rules of language make up only a small portion of our language competence: acquisition does not provide 100% competence, and there is often a small portion of grammar, punctuation, and spelling that even the most proficient native speakers never acquire. While it is important to learn these aspects of language, since writing is the only form that requires 100% competence, they make up only a small portion of our language competence.
Due to these difficulties, Krashen recommends using the monitor at times when it does not interfere
with communication, such as while writing.
Criticism
The model has been criticized by some linguists, and some do not consider it a valid hypothesis. It has, however, inspired much research, and many linguists praise its value.
The theory underlies Krashen and Terrell's comprehension-based language learning methodology
known as the natural approach (1983). The Focal Skills approach, first developed in 1988, is also
based on the theory.

Motivation
Motivation is the activation or energization of goal-oriented behavior. Motivation may be intrinsic or extrinsic. The term is generally used for humans but, theoretically, it can also be used to describe the causes of animal behavior. This article refers to human motivation. According to various theories, motivation may be rooted in the basic need to minimize physical pain and maximize pleasure; it may include specific needs such as eating and resting, or a desired object, hobby, goal, state of being, or ideal; or it may be attributed to less apparent reasons such as altruism, morality, or avoiding mortality.
Intrinsic motivation comes from rewards inherent to a task or activity itself - the enjoyment of a
puzzle or the love of playing. This form of motivation has been studied by social and educational
psychologists since the early 1970s. Research has found that it is usually associated with high
educational achievement and enjoyment by students. Intrinsic motivation has been explained by


Fritz Heider's attribution theory, Bandura's work on self-efficacy, and Ryan and Deci's cognitive
evaluation theory. Students are likely to be intrinsically motivated if they:
attribute their educational results to internal factors that they can control (e.g. the amount of
effort they put in),
believe they can be effective agents in reaching desired goals (i.e. the results are not determined by
luck),
are interested in mastering a topic, rather than just rote-learning to achieve good grades.
Extrinsic motivation
Extrinsic motivation comes from outside of the performer. Money is the most obvious example, but
coercion and threat of punishment are also common extrinsic motivations.
In sports, the crowd may cheer on the performer, which may motivate him or her to do well.
Trophies are also extrinsic incentives. Competition is in general extrinsic because it encourages the
performer to win and beat others, not to enjoy the intrinsic rewards of the activity.
Social psychological research has indicated that extrinsic rewards can lead to overjustification and a subsequent reduction in intrinsic motivation. In one study demonstrating this effect, children who expected to be (and were) rewarded with a ribbon and a gold star for drawing pictures spent less time playing with the drawing materials in subsequent observations than children who were assigned to an unexpected-reward condition or who received no extrinsic reward.
Self-control
The self-control of motivation is increasingly understood as a subset of emotional intelligence; a person may be highly intelligent according to a more conservative definition (as measured by many intelligence tests), yet unmotivated to dedicate this intelligence to certain tasks. Yale School of Management professor Victor Vroom's "expectancy theory" provides an account of when people will decide whether to exert self-control to pursue a particular goal.
Drives and desires can be described as a deficiency or need that activates behaviour that is aimed at
a goal or an incentive. These are thought to originate within the individual and may not require
external stimuli to encourage the behaviour. Basic drives could be sparked by deficiencies such as
hunger, which motivates a person to seek food; whereas more subtle drives might be the desire for
praise and approval, which motivates a person to behave in a manner pleasing to others.
By contrast, the role of extrinsic rewards and stimuli can be seen in the example of training animals
by giving them treats when they perform a trick correctly. The treat motivates the animals to
perform the trick consistently, even later when the treat is removed from the process.
Motivational theories
The incentive theory of motivation
A reward, tangible or intangible, is presented after the occurrence of an action (i.e. a behavior) with the intent of causing the behavior to occur again. This is done by associating positive meaning with the behavior. Studies show that the effect is greater if the person receives the reward immediately, and that it decreases as the delay lengthens. A repeated action-reward combination can cause the action to become a habit. Motivation comes from two sources: oneself and other people. These two sources are called intrinsic motivation and extrinsic motivation, respectively.
Applying proper motivational techniques can be much harder than it seems. Steven Kerr notes that
when creating a reward system, it can be easy to reward A, while hoping for B, and in the process,
reap harmful effects that can jeopardize your goals.


A reinforcer is different from a reward, in that reinforcement is intended to create a measured increase in the rate of a desirable behavior following the addition of something to the environment.
Drive-reduction theories
Main article: Drive theory
There are a number of drive theories. The Drive Reduction Theory grows out of the concept that we
have certain biological drives, such as hunger. As time passes the strength of the drive increases if it
is not satisfied (in this case by eating). Upon satisfying a drive the drive's strength is reduced. The
theory is based on diverse ideas from the theories of Freud to the ideas of feedback control systems,
such as a thermostat.
Drive theory has some intuitive or folk validity. For instance, when preparing food, the drive model appears to be compatible with sensations of rising hunger as the food is prepared and, after the food has been consumed, a decrease in subjective hunger. There are several problems, however, that leave the validity of drive reduction open for debate. The first problem is that it does not explain how secondary reinforcers reduce drive. For example, money satisfies no biological or psychological need, but a paycheck appears to reduce drive through second-order conditioning. Secondly, a drive, such as hunger, is viewed as having a "desire" to eat, making the drive a homuncular being, a feature criticized as simply moving the fundamental problem behind this "small man" and his desires.
In addition, it is clear that drive reduction theory cannot be a complete theory of behavior, or a hungry human could not prepare a meal without eating the food before finishing cooking it. That drive theory can be made to cope with all kinds of behavior only by adding extra traits (such as restraint) or extra drives (a drive for "tasty" food that combines with the drive for "food" to explain cooking) renders it hard to test.
Cognitive dissonance theory
Main article: Cognitive dissonance
Suggested by Leon Festinger, cognitive dissonance occurs when an individual experiences some degree of discomfort resulting from an incompatibility between two cognitions. For example, a consumer may seek to reassure himself regarding a purchase, feeling, in retrospect, that another decision might have been preferable.
Another example of cognitive dissonance is when a belief and a behavior are in conflict: a person may wish to be healthy and believe that smoking is bad for one's health, yet continue to smoke.
Need theories
Need hierarchy theory
Abraham Maslow's theory is one of the most widely discussed theories of motivation.
The theory can be summarized as follows:
Human beings have wants and desires which influence their behavior. Only unsatisfied needs influence behavior; satisfied needs do not.
Since needs are many, they are arranged in order of importance, from the basic to the complex.
The person advances to the next level of needs only after the lower level need is at least minimally
satisfied.
The further the progress up the hierarchy, the more individuality, humanness and psychological
health a person will show.
The needs, listed from the most basic (lowest, earliest) to the most complex (highest, latest), are as follows:
Physiology

Safety
Belongingness
Self-esteem
Self-actualization
Herzbergs two-factor theory
Frederick Herzberg's two-factor theory, also known as intrinsic/extrinsic motivation, concludes that certain factors in the workplace result in job satisfaction, while others, if absent, lead to dissatisfaction.
The factors that motivate people can change over their lifetime, but "respect for me as a person" is one of the top motivating factors at any stage of life.
He distinguished between:
Motivators (e.g. challenging work, recognition, responsibility), which give positive satisfaction, and
Hygiene factors (e.g. status, job security, salary and fringe benefits), which do not motivate if present but, if absent, result in demotivation.
The name "hygiene factors" is used because, like hygiene, their presence will not make you healthier, but their absence can cause health deterioration.
The theory is sometimes called the "Motivator-Hygiene Theory" and/or "The Dual Structure Theory."
Herzberg's theory has found application in such occupational fields as information systems and in
studies of user satisfaction (see Computer user satisfaction).
Alderfers ERG theory
Clayton Alderfer, expanding on Maslow's hierarchy of needs, created the ERG theory (existence, relatedness and growth). The lower-order physiological and safety needs are placed in the existence category, while love and esteem needs are placed in the relatedness category. The growth category contains the self-actualization and self-esteem needs.
Self-determination theory
Self-determination theory, developed by Edward Deci and Richard Ryan, focuses on the importance
of intrinsic motivation in driving human behavior. Like Maslow's hierarchical theory and others that
built on it, SDT posits a natural tendency toward growth and development. Unlike these other
theories, however, SDT does not include any sort of "autopilot" for achievement, but instead
requires active encouragement from the environment. The primary factors that encourage
motivation and development are autonomy, competence feedback, and relatedness.
Broad theories
The latest approach to achievement motivation is an integrative perspective, as laid out in the "Onion-Ring-Model of Achievement Motivation" by Heinz Schuler, George C. Thornton III, Andreas Frintrup and Rose Mueller-Hanson. It is based on the premise that performance motivation results from the way broad components of personality are directed towards performance. As a result, it includes a range of dimensions that are relevant to success at work but which are not conventionally regarded as part of performance motivation. In particular, it integrates formerly separate approaches, such as Need for Achievement, with social motives like Dominance. The Achievement Motivation Inventory (AMI) (Schuler, Thornton, Frintrup & Mueller-Hanson, 2003) is based on this theory and assesses three factors (17 separate scales) relevant to vocational and professional success.
Cognitive theories
Goal theory

Goal theory is based on the notion that individuals sometimes have a drive to reach a clearly defined end state. Often, this end state is a reward in itself. A goal's efficiency is affected by three features: proximity, difficulty and specificity. An ideal goal should present a situation where the time between the initiation of behavior and the end state is short. This explains why some children are more motivated to learn how to ride a bike than to master algebra. A goal should be moderate: not too hard or too easy to complete. In either case, most people are not optimally motivated, as many want a challenge (which assumes some kind of insecurity about success) while at the same time wanting to feel that there is a substantial probability of succeeding. Specificity concerns how clearly the goal is described: it should be objectively defined and intelligible to the individual. A classic example of a poorly specified goal is "to get the highest possible grade"; most children have no idea how much effort they need to reach that goal.

Multiple intelligences
The theory of multiple intelligences was proposed by Howard Gardner in 1983 to more accurately
define the concept of intelligence and to address the question whether methods which claim to
measure intelligence (or aspects thereof) are truly scientific.
Gardner's theory argues that intelligence, particularly as it is traditionally defined, does not
sufficiently encompass the wide variety of abilities humans display. In his conception, a child who
masters multiplication easily is not necessarily more intelligent overall than a child who struggles to
do so. The second child may be stronger in another kind of intelligence and therefore 1) may best
learn the given material through a different approach, 2) may excel in a field outside of mathematics,
or 3) may even be looking at the multiplication process at a fundamentally deeper level, which can
result in a seeming slowness that hides a mathematical intelligence that is potentially higher than
that of a child who easily memorizes the multiplication table.
Gardner's categories of multiple intelligences is a theoretical axiom whose structure is a derivative of
Kant's metaphysical theory of personality. Gardner's perspective of human intelligence attempts to
deconstruct Kant's a priori categorical structure. Whereas Kant describes human personality as a
homeostatic dialogism that extends Aristotelian "idea", Gardner suggests that personality is not
static at all; rather it is in a constant state of flux. While personality cannot be pinned down and truly
defined, he argues that it circumnavigates around a central impossibility. Therefore, positive human
traits often embody the characteristics of their opposites. Human intelligence is both a reflection of
and a collapse of its own categories. This being the case, Gardner tries to reconstruct the
disintegration of Kant's faulty construction. Gardner's heteroglossia entails 7 distinct non-Kantian
groups. The categories of intelligence proposed by Gardner (1983) are the following:
Bodily-kinesthetic
This area has to do with bodily movement and physiology. In theory, people who have bodily-kinesthetic intelligence should learn better by involving muscular movement (e.g. getting up and moving around in the learning experience), and are generally good at physical activities such as sports or dance. They may enjoy acting or performing, and in general they are good at building and making things. They often learn best by doing something physically, rather than by reading or hearing about it. Those with strong bodily-kinesthetic intelligence seem to use what might be termed muscle memory: they remember things through their body rather than through words or images.
Careers that suit those with this intelligence include athletes, dancers, musicians, actors, surgeons, doctors, builders, police officers, and soldiers. Although these careers can be duplicated through virtual simulation, the simulation will not produce the actual physical learning that is needed in this intelligence.
Interpersonal

This area has to do with interaction with others. In theory, people who have a high interpersonal intelligence tend to be extroverts, characterized by their sensitivity to others' moods, feelings, temperaments and motivations, and by their ability to cooperate as part of a group. They communicate effectively and empathize easily with others, and may be either leaders or followers. They typically learn best by working with others and often enjoy discussion and debate.
Careers that suit those with this intelligence include salespeople, politicians, managers, teachers, and social workers.
Verbal-linguistic
This area has to do with words, spoken or written. People with high verbal-linguistic intelligence
display a facility with words and languages. They are typically good at reading, writing, telling stories
and memorizing words along with dates. They tend to learn best by reading, taking notes, listening to
lectures, and discussion and debate. They are also frequently skilled at explaining, teaching and
oration or persuasive speaking. Those with verbal-linguistic intelligence learn foreign languages very
easily as they have high verbal memory and recall, and an ability to understand and manipulate
syntax and structure.
Careers that suit those with this intelligence include writers, lawyers, philosophers, journalists,
politicians, poets, and teachers.
Logical-mathematical
This area has to do with logic, abstractions, reasoning, and numbers. While it is often assumed that those with this intelligence naturally excel in mathematics, chess, computer programming and other logical or numerical activities, a more accurate definition places less emphasis on traditional mathematical ability and more on reasoning capabilities, abstract pattern recognition, scientific thinking and investigation, and the ability to perform complex calculations. It correlates strongly with traditional concepts of "intelligence" or IQ.
Careers which suit those with this intelligence include scientists, mathematicians, engineers, doctors
and economists.
Intrapersonal
This area has to do with introspective and self-reflective capacities. People with intrapersonal intelligence are intuitive and typically introverted. They are skillful at deciphering their own feelings and motivations. This intelligence involves having a deep understanding of the self: knowing your strengths and weaknesses, knowing what makes you unique, and being able to predict your own reactions and emotions.
Careers which suit those with this intelligence include philosophers, psychologists, theologians and
writers.
Visual-spatial
This area has to do with vision and spatial judgment. People with strong visual-spatial intelligence are
typically very good at visualizing and mentally manipulating objects. Those with strong spatial
intelligence are often proficient at solving puzzles. They have a strong visual memory and are often
artistically inclined. Those with visual-spatial intelligence also generally have a very good sense of
direction and may also have very good hand-eye coordination, although this is normally seen as a
characteristic of the bodily-kinesthetic intelligence.
There appears to be a high correlation between spatial and mathematical abilities, which seems to
indicate that these two intelligences are not independent. Since solving a mathematical problem
involves manipulating symbols and numbers, spatial intelligence is involved.
Careers that suit those with this intelligence include artists, engineers, and architects.

Musical
This area has to do with rhythm, music, and hearing. Those who have a high level of musical-rhythmic intelligence display greater sensitivity to sounds, rhythms, tones, and music. They normally have good pitch and may even have absolute pitch, and are able to sing, play musical instruments, and compose music. Since there is a strong auditory component to this intelligence, those who are strongest in it may learn best via lecture. In addition, they will often use songs or rhythms to learn and memorize information, and may work best with music playing in the background.
Careers that suit those with this intelligence include instrumentalists, singers, conductors, disc jockeys, orators, writers (to a certain extent) and composers.
Naturalistic
This area has to do with nature, nurturing and relating information to one's natural surroundings.
This type of intelligence was not part of Gardner's original theory of Multiple Intelligences, but was
added to the theory in 1997. Those with it are said to have greater sensitivity to nature and their
place within it, the ability to nurture and grow things, and greater ease in caring for, taming and
interacting with animals. They may also be able to discern changes in weather or similar fluctuations
in their natural surroundings. They are also good at recognizing and classifying different species. They
must connect a new experience with prior knowledge to truly learn something new.
"Naturalists" learn best when the subject involves collecting and analyzing, or is closely related to
something prominent in nature; they also don't enjoy learning unfamiliar or seemingly useless
subjects with little or no connections to nature. It is advised that naturalistic learners would learn
more through being outside or in a kinesthetic way.
The theory behind this intelligence is often criticized, much like the spiritual or existential intelligence
(see below), as it is seen by many as not indicative of an intelligence but rather an interest. However,
it remains an indispensable intelligence for humans who live almost entirely from nature such as
some native populations.
Careers which suit those with this intelligence include scientists, naturalists, conservationists,
gardeners, and farmers.
Other intelligences
Other intelligences have been suggested or explored by Gardner and his colleagues, including spiritual, existential and moral intelligence. Gardner excluded spiritual intelligence due to what he perceived as the inability to codify criteria comparable to those of the other intelligences. Existential intelligence (the capacity to raise and reflect on philosophical questions about life, death, and ultimate realities) meets most of the criteria, with the exception of identifiable areas of the brain that specialize in this faculty. Moral capacities were excluded because they are normative rather than descriptive. Moreover, Gardner has considered the idea of a digital intelligence, an innate ability to learn through computers, television, and so on, though he believes this type of intelligence can still be categorized under logical-mathematical intelligence, which has to do with technology to some extent.


Neurolinguistics
The study of the connections between language and the brain. The study of the relation between language and the brain was begun in the mid-nineteenth century by the Frenchman Paul Broca and the German Carl Wernicke. What they did was to study and characterize the aphasia (disturbed language) of people who had suffered brain damage and then, after the sufferers' deaths, to conduct post-mortem examinations in order to find out which areas of the brain had been damaged.
In this way, they succeeded in identifying two specific areas of the brain, today called Broca's area and Wernicke's area, each of which is responsible for specific aspects of language use. These findings confirmed the reality of the localization of language in the brain; moreover, since these areas are nearly always located on the left side of the brain, they also confirmed the lateralization of the brain.
In the mid-twentieth century, the American neurologist Norman Geschwind elaborated the view of
the brain as consisting of a number of specialized components with connections between them, and
he also provided the basis of our modern classification of the several language areas in the brain and
of the types of aphasia resulting from damage to each.
More recently, the introduction of sophisticated brain scanners has allowed specialists to examine
the activity in the brains of healthy, conscious subjects who are performing specific linguistic tasks
like reading, speaking and listening. The new data have both confirmed and extended our
understanding of the location and functions of the several language areas.


Origin and Evolution of Language


The series of steps by which human language came into existence. Very little is known about how
human language came into existence, though there is no shortage of speculation by specialists in a
dozen different disciplines.
The members of most non-human species have some way of communicating with their fellows, and
mammals in particular typically use a combination of calls (vocal signals) with postures, gestures and
expressions. Proponents of the continuity hypothesis see language as deriving directly from such
systems by simple elaboration. But most linguists and many others see these signals as more akin to
such non-linguistic activities as sobbing, laughing and screaming, and they prefer to invoke
discontinuity hypotheses, by which language has an entirely different origin, one not detectable in
living non-human species.
Specialists differ in the date they assign to the rise of language. The most popular view sees language as arising with our own species, Homo sapiens, a little over 100,000 years ago. But some anthropologists believe they can detect evidence for language areas in the brains of our hominid ancestors of one or two million years ago, while, on the other hand, some archaeologists argue that full-blown language can only have arisen around 40,000-50,000 years ago, a time when they see evidence for a spectacular flowering of art, culture and material goods.
There is no question that language has given humans a huge evolutionary advantage. Much of the physical evidence takes the form of feedback loops: standing up on two legs raised our sensory organs for more input and freed our hands; having fingers in front of our eyes allowed us to iconically represent numbers (up to 10), which developed our memories and abstract reasoning and grew our brains; increasingly subtle dexterity promoted brain growth; balancing the head on the shoulders allowed it to be bigger; the head grew in size as the jaw, forehead and vocal tract enlarged; bearing children with large heads while two-legged mothers had smaller pelvises meant that babies had to be born before their full-term development was complete; so babies are helpless in comparison with other mammals, and language gives us a means of passing on childcare knowledge and increases the individual's chances of surviving infancy; and so on.
While specialists in some other disciplines often like to portray language as something not very
different from what vervet monkeys do, and hence as requiring a minimum of explanation, almost all
linguists are satisfied that human language is in fact dramatically, utterly different from everything
else we can see, that language, more than anything else, is what makes us human, and that the origin
of language is therefore a major problem which we are not close to solving. Most linguists further
believe that our language faculty is genetic in nature, that our remote ancestors simply evolved it,
and hence that we are born to use language in the way that birds are born to fly.


Phonotactics
The rules for combining phonemes into words in a language. Every variety of every language possesses a larger or smaller set of phonemes, and every legitimate word in that language must consist of a permitted sequence of those phonemes. But the key word here is permitted: no language allows its phonemes to occur in just any sequence at all. Instead, each language imposes strict limits on the sequences of phonemes which are allowed to occur in a word, and those restrictions are its phonotactics.
For example, English allows a word to begin with /b/ (as in bed), with /r/ (as in red), with /l/ (as in led), with /n/ (as in net), with the cluster /br/ (as in bread), and with the cluster /bl/ (as in bled). But it does not permit a word to begin with the cluster /bn/: no such word as *bned is even conceivable in English (the asterisk indicates this). Moreover, if a word begins with /br/ or /bl/, then the next phoneme must be a vowel: nothing like *blsed or *brved is possible either (though such a combination may be possible in another language).
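
These restrictions are easy to model mechanically. The following minimal sketch in Python checks a word against the English onsets named above; the vowel set and cluster list are deliberately tiny and illustrative, not a full description of English phonotactics.

# Toy phonotactic check limited to the onsets discussed in the text.
VOWELS = set("aeiou")                                # toy vowel inventory
PERMITTED_ONSETS = {"b", "r", "l", "n", "br", "bl"}  # clusters named above

def has_legal_onset(word: str) -> bool:
    """Split off the initial consonant run and test it; a legal onset
    must also be followed by a vowel, so 'blsed' is rejected."""
    i = 0
    while i < len(word) and word[i] not in VOWELS:
        i += 1
    onset, rest = word[:i], word[i:]
    return onset in PERMITTED_ONSETS and rest != ""

# Vowel-initial words fall outside the scope of this sketch.
for w in ["bed", "bread", "bled", "bned", "blsed"]:
    print(w, has_legal_onset(w))  # bned and blsed come out False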
Phonotactic constraints may differ widely from one language to another, even when the sets of phonemes are somewhat similar. For example, Hawaiian allows no consonant clusters at all, and every syllable must end in a vowel, so kanaka 'man' is a legal word, but something like kanak or kanka is not. Japanese has a similar rule, so words borrowed from English that would be phonotactically illegitimate are nipponized: biru 'beer', sunugurasu 'sunglasses', gurufurendu 'girlfriend'. The Caucasian language Georgian permits astounding consonant clusters, as in mtsvrtneli 'trainer' and vprtskvni 'I'm peeling it'. The Canadian language Bella Coola, unusually, permits words containing no vowels at all, like kwtvw 'make it big!'.

Phrase-Structure Grammar
A type of generative grammar which represents constituent structure directly. We normally regard
the structure of any sentence as an instance of constituent structure, in which smaller syntactic units
are combined into larger units, which are then combined into still larger units, and so on. This kind of
structure can be readily handled by a phrase-structure grammar (PSG) (the full form of the name is
context-free phrase-structure grammar, or CF-PSG). The idea of a PSG is simple. We first note what
syntactic categories appear to exist in a given language, and what different internal structures each
of these can have. Then, for each such structure, we write a rule that displays that structure. So, for
example, an English sentence typically consists of a noun phrase followed by a verb phrase (as in My
sister bought a car), and we therefore write a phrase-structure rule as follows: S → NP VP. This says that a sentence may consist of a noun phrase followed by a verb phrase.
Further, an English NP may consist of a determiner (like the or my) followed by an N-bar (like little girl or box of chocolates), and so we write another rule: NP → Det N′. We continue in this way until we have a rule for every structure in the language.
Now the set of rules can be used to generate sentences. Starting with S (for sentence), we apply
some suitable rule to tell us what units the sentence consists of, and then to each of those units we
apply a further rule to tell us what units it consists of, and so on, until we reach the level of individual
words, at which point we simply insert words belonging to the appropriate parts of speech. The
result is usually displayed graphically in a tree. PSGs are the most appropriate type of grammar for
teaching elementary syntax, and moreover they are powerful enough to describe successfully almost
every construction occurring in any language, though there exist one or two rare and unusual
constructions which they cannot handle.
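
The generation procedure just described is easy to sketch in code. The following minimal Python example uses the two rules named above plus one invented VP rule and a tiny invented lexicon; it illustrates how phrase-structure rules drive generation, and is not a serious grammar of English.

import random

# Toy phrase-structure rules: each category maps to its possible expansions.
RULES = {
    "S":  [["NP", "VP"]],   # S -> NP VP, as in the text
    "NP": [["Det", "N"]],   # a simplified NP rule (no N-bar level)
    "VP": [["V", "NP"]],    # an invented rule for illustration
}
# Toy lexicon: parts of speech map to words that can be inserted.
LEXICON = {
    "Det": ["my", "the"],
    "N":   ["sister", "car"],
    "V":   ["bought", "saw"],
}

def generate(symbol: str) -> list[str]:
    """Expand a category by a rule, or insert a word once a part of
    speech is reached, exactly as described in the text."""
    if symbol in LEXICON:
        return [random.choice(LEXICON[symbol])]
    expansion = random.choice(RULES[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate("S")))  # e.g. "my sister bought the car"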
PSGs were introduced by the American linguist Noam Chomsky in the 1950s, but Chomsky had little interest in them, and they were not really explored seriously until the British linguist Gerald Gazdar began developing a sophisticated version around 1980; his version was dubbed Generalized Phrase-Structure Grammar (GPSG). More recently, the Americans Carl Pollard and Ivan Sag have constructed a very different-looking version called Head-Driven Phrase-Structure Grammar (HPSG), which is both linguistically interesting and convenient for computational purposes. HPSG bases its formal rules on a richly organized and structured lexicon.
A lexical item (a word) is made up of two features, its sound and its syntactic-semantic information, which are in turn divided into further sub-features. An example would be the verb set in, which almost always has a semantics of negativity and adversity (bad weather set in, financial problems set in) and cannot be used with human agents. To describe the grammar of the verb fully, we need a lexico-semantic information module built into the description. HPSG is an early example of the recognition that grammar and lexis are interdependent.

Pidgin
Pidgin a language which develops as a contact language when groups of people who speak
different languages try to communicate with one another on a regular basis. For example, this has
occurred many times in the past when foreign traders had to communicate with the local population
or groups of workers from different language backgrounds on plantations or in factories. A pidgin
usually has a limited vocabulary and a reduced grammatical structure which may expand when a
pidgin is used over a long period and for many purposes. For example, Tok Pisin (New Guinea Pidgin):
yu ken kisim long olgeta bik pela stua
you can get(it) at all big (noun marker) stores
Usually pidgins have no native speakers but there are expanded pidgins, e.g. Tok Pisin in Papua New
Guinea and Nigerian Pidgin English in West Africa, which are spoken by some people in their
community as a first or PRIMARY LANGUAGE. Often expanded pidgins will develop into CREOLE
languages.

Creole: a PIDGIN language which has become the native language of a group of speakers, being used
for all or many of their daily communicative needs. Usually, the sentence structures and vocabulary
range of a creole are far more complex than those of a pidgin language. Creoles are usually classified
according to the language from which most of their vocabulary comes, e.g. English-based, French-based, Portuguese-based, and Swahili-based creoles.

Place of Articulation
A label for the speech organs most directly involved in producing a consonant. By definition, the
production of a consonant involves a constriction (narrowing or closure) somewhere in the vocal
tract between the glottis and the lips. We have a standard terminology for labelling the particular
parts of the vocal tract directly involved in making that constriction, and each such label denotes a
particular place of articulation.
Toward the bottom end of the vocal tract, we can safely use simple labels like glottal and pharyngeal.
Farther up, we need in principle to identify both the lower articulator and the upper one (in that
order), and for this purpose we use a compound label with the first identifier ending in -o and the
second in -al (or -ar).
Examples:
dorso-velar: back (dorsum) of tongue plus velum; e.g., [k]
lamino-alveolar: blade (lamina) of tongue plus alveolar ridge; e.g., English [n] for most speakers
apico-dental: tip (apex) of tongue plus upper teeth; e.g., French [t]
sublamino-prepalatal: underside of tongue plus front of palate; e.g., some consonants in many Australian languages.
If the lower articulator is obvious or unimportant, we can omit it from the label; hence we can say
velar instead of dorso-velar, or alveolar for any consonant involving the alveolar ridge.
A few traditional terms are unsystematic, such as retroflex for any consonant in which the tip of the
tongue is curled up, palatoalveolar for a consonant involving a long constriction from the alveolar
ridge to the palate, and bilabial in place of the expected labio-labial. Note also coronal for any
consonant during which the blade of the tongue is raised, whether or not the blade is involved in the
articulation.
For a consonant involving two simultaneous constrictions, we use the -al ending twice, so that [w],
for example, is labial-velar (though the unsystematic labio-velar is also found).
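The compound labelling convention lends itself to a small sketch; the articulator tables below are abbreviated, and the exception list holds the unsystematic traditional terms just mentioned.

# Lower articulators take the -o form, upper articulators the -al/-ar form.
LOWER = {"dorsum": "dorso", "lamina": "lamino", "apex": "apico",
         "tongue underside": "sublamino", "lower lip": "labio"}
UPPER = {"velum": "velar", "alveolar ridge": "alveolar", "upper teeth": "dental",
         "front of palate": "prepalatal", "upper lip": "labial"}
TRADITIONAL = {("lower lip", "upper lip"): "bilabial"}  # not the expected "labio-labial"

def place_label(lower, upper, omit_lower=False):
    if (lower, upper) in TRADITIONAL:
        return TRADITIONAL[(lower, upper)]
    if omit_lower:  # lower articulator obvious or unimportant
        return UPPER[upper]
    return f"{LOWER[lower]}-{UPPER[upper]}"

print(place_label("dorsum", "velum"))                   # dorso-velar, e.g. [k]
print(place_label("dorsum", "velum", omit_lower=True))  # velar
print(place_label("lower lip", "upper lip"))            # bilabial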

Pragmatics
Pragmatics is a subfield of linguistics which studies the ways in which context contributes to
meaning. Pragmatics encompasses speech act theory, conversational implicature, talk in
interaction and other approaches to language behavior in philosophy, sociology, and linguistics. It
studies how the transmission of meaning depends not only on the linguistic knowledge (e.g.
grammar, lexicon etc.) of the speaker and listener, but also on the context of the utterance,
knowledge about the status of those involved, the inferred intent of the speaker, and so on. In this
respect, pragmatics explains how language users are able to overcome apparent ambiguity, since
meaning relies on the manner, place, time etc. of an utterance. The ability to understand another
speaker's intended meaning is called pragmatic competence. An utterance describing pragmatic
function is described as metapragmatic. Pragmatic awareness is regarded as one of the most
challenging aspects of language learning, and comes only through experience.
Structural ambiguity
The sentence "You have a green light" is ambiguous. Without knowing the context, the identity of
the speaker, and their intent, it is not possible to infer the meaning with confidence. For example:
It could mean you are holding a green light bulb.
Or that you have a green light to drive your car.
Or it could be indicating that you can go ahead with the project.
Or that your body has a green glow.
Similarly, the sentence "Sherlock saw the man with binoculars" could mean that Sherlock observed
the man by using binoculars; or it could mean that Sherlock observed a man who was holding
binoculars. The meaning of the sentence depends on an understanding of the context and the
speaker's intent. As defined in linguistics, a sentence is an abstract entity: a string of words
divorced from non-linguistic context, as opposed to an utterance, which is a concrete example of a
speech act in a specific context. The cat sat on the mat is a sentence of English; if you say to your
sister on Tuesday afternoon: "The cat sat on the mat", this is an example of an utterance. Thus, there
is no such thing as a sentence with a single true meaning; it is underspecified (which cat sat on which
mat?) and potentially ambiguous. The meaning of an utterance, on the other hand, is inferred based
on linguistic knowledge and knowledge of the non-linguistic context of the utterance (which may or
may not be sufficient to resolve ambiguity).
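The attachment ambiguity in the Sherlock example can be made concrete with a toy grammar. The sketch below assumes the NLTK library is available; the grammar is invented for the example and is deliberately written so that the prepositional phrase can attach either to the verb phrase (the instrument reading) or inside the noun phrase (the modifier reading), so the parser returns two distinct trees.

import nltk  # assumes NLTK is installed

grammar = nltk.CFG.fromstring("""
S -> NP VP
NP -> 'Sherlock' | Det N | Det N PP | N
VP -> V NP | V NP PP
PP -> P NP
Det -> 'the'
N -> 'man' | 'binoculars'
V -> 'saw'
P -> 'with'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("Sherlock saw the man with binoculars".split()):
    print(tree)  # one tree per reading: PP under VP vs. PP inside the object NP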
Origins
Pragmatics was a reaction to structuralist linguistics as outlined by Ferdinand de Saussure. In many
cases, it expanded upon his idea that language has an analyzable structure, composed of parts that
can be defined in relation to others. Pragmatics first engaged only in synchronic study, as opposed
to examining the historical development of language. However, it rejected the notion that all
meaning comes from signs existing purely in the abstract space of langue. Meanwhile, historical
pragmatics has also come into being.
While Chomskyan linguistics famously repudiated Bloomfieldian anthropological linguistics,
pragmatics continues its tradition. Also influential were Franz Boas, Edward Sapir and Benjamin
Whorf.
Areas of interest
The study of the speaker's meaning, not focusing on the phonetic or grammatical form of an
utterance, but instead on what the speaker's intentions and beliefs are.
The study of the meaning in context, and the influence that a given context can have on the message.
It requires knowledge of the speaker's identities, and the place and time of the utterance.
The study of implicatures, i.e. the things that are communicated even though they are not explicitly
expressed.
The study of relative distance, both social and physical, between speakers in order to understand
what determines the choice of what is said and what is not said.
The study of what is not meant, as opposed to the intended meaning, i.e. that which is unsaid and
unintended, or unintentional.
Referential uses of language
When we speak of the referential uses of language we are talking about how we use signs to refer to
certain items. Below is an explanation of, first, what a sign is, second, how meanings are
accomplished through its usage.
A sign is the link or relationship between a signified and the signifier as defined by Saussure and
Huguenin. The signified is some entity or concept in the world. The signifier represents the signified.
An example would be:
Signified: the concept cat
Signifier: the word 'cat'
The relationship between the two gives the sign meaning. This relationship can be further explained
by considering what we mean by "meaning." In pragmatics, there are two different types of meaning
to consider: semantico-referential meaning and indexical meaning. Semantico-referential meaning
refers to the aspect of meaning, which describes events in the world that are independent of the
circumstance they are uttered in. An example would be propositions such as:
"Santa Claus eats cookies."
In this case, the proposition is describing that Santa Claus eats cookies. The meaning of this
proposition does not rely on whether or not Santa Claus is eating cookies at the time of its utterance.
Santa Claus could be eating cookies at any time and the meaning of the proposition would remain
the same. The meaning is simply describing something that is the case in the world. In contrast, the
proposition, "Santa Claus is eating a cookie right now," describes events that are happening at the
time the proposition is uttered.
Semantico-referential meaning is also present in meta-semantical statements such as:
Tiger: omnivorous, a mammal
If someone were to say that a tiger is an omnivorous animal in one context and a mammal in
another, the definition of tiger would still be the same. The meaning of the sign tiger is describing
some animal in the world, which does not change in either circumstance.
Indexical meaning, on the other hand, is dependent on the context of the utterance and has rules of
use. By rules of use, it is meant that indexicals can tell you when they are used, but not what they
actually mean.
Example: "I"
Whom "I" refers to depends on the context and the person uttering it.
As mentioned, these meanings are brought about through the relationship between the signified and
the signifier. One way to define the relationship is by placing signs in two categories: referential
indexical signs, also called "shifters," and pure indexical signs.
Referential indexical signs are signs where the meaning shifts depending on the context, hence the
nickname "shifters." 'I' would be considered a referential indexical sign. The referential aspect of its
meaning would be '1st person singular' while the indexical aspect would be the person who is
speaking (refer above for definitions of semantico-referential and indexical meaning). Another
example would be:
"This"
Referential: singular count
Indexical: Close by
A pure indexical sign does not contribute to the meaning of the propositions at all. It is an example of
a ""non-referential use of language.""
A second way to define the signified and signifier relationship is C. S. Peirce's trichotomy.
The components of the trichotomy are the following:
1. Icon: the signified resembles the signifier (signified: a dog's barking noise, signifier: bow-wow)
2. Index: the signified and signifier are linked by proximity or the signifier has meaning only because
it is pointing to the signified
3. Symbol: the signified and signifier are arbitrarily linked (signified: a cat, signifier: the word cat)
These relationships allow us to use signs to convey what we want to say. If two people were in a
room and one of them wanted to refer to a characteristic of a chair in the room he would say "this
chair has four legs" instead of "a chair has four legs." The former relies on context (indexical and
referential meaning) by referring to a chair specifically in the room at that moment while the latter is
independent of the context (semantico-referential meaning), meaning the concept chair.
Non-referential uses of language
Silverstein's "Pure" Indexes
Michael Silverstein has argued that "nonreferential" or "pure" indexes do not contribute to an
utterance's referential meaning but instead "signal some particular value of one or more contextual
variables." Although nonreferential indexes are devoid of semantico-referential meaning, they do
encode "pragmatic" meaning.
The sorts of contexts that such indexes can mark are varied. Examples include:
Sex indexes are affixes or inflections that index the sex of the speaker, e.g. the verb forms of female
Koasati speakers take the suffix "-s".
Deference indexes are words that signal social differences (usually related to status or age) between
the speaker and the addressee. The most common example of a deference index is the V form in a
language with a T-V distinction, the widespread phenomenon in which there are multiple second-person pronouns that correspond to the addressee's relative status or familiarity to the speaker.
Honorifics are another common form of deference index and demonstrate the speaker's respect or
esteem for the addressee via special forms of address and/or self-humbling first-person pronouns.
An Affinal taboo index is an example of avoidance speech and produces and reinforces sociological
distance, as seen in the Aboriginal Dyirbal language of Australia. In this language and some others,
there is a social taboo against the use of the everyday lexicon in the presence of certain relatives
(mother-in-law, child-in-law, paternal aunt's child, and maternal uncle's child). If any of those
relatives are present, a Dyirbal speaker has to switch to a completely separate lexicon reserved for
that purpose.
In all of these cases, the semantico-referential meaning of the utterances is unchanged from that of
the other possible (but often impermissible) forms, but the pragmatic meaning is vastly different.
The Performative
Main articles: Performative utterance, Speech act theory
J.L. Austin introduced the concept of the Performative, contrasted in his writing with "constative"
(i.e. descriptive) utterances. According to Austin's original formulation, a performative is a type of
utterance characterized by two distinctive features:
It is not truth-evaluable (i.e. it is neither true nor false)
Its uttering performs an action rather than simply describing one
However, a performative utterance must also conform to a set of felicity conditions.
Examples:
"I hereby pronounce you man and wife."
"I accept your apology."
"This meeting is now adjourned."
Jakobson's Six Functions of Language
Roman Jakobson, expanding on the work of Karl Bühler, described six "constitutive factors" of a
speech event, each of which represents the privileging of a corresponding function, and only one of
which is the referential (which corresponds to the context of the speech event). The six constitutive
factors and their corresponding functions are diagrammed below.

The Six Constitutive Factors of a Speech Event

                Context
                Message
Addresser ---------------------- Addressee
                Contact
                Code

The Six Functions of Language

                Referential
                Poetic
Emotive ------------------------ Conative
                Phatic
                Metalingual
The Referential Function corresponds to the factor of Context and describes a situation, object or
mental state. The descriptive statements of the referential function can consist of both definite
descriptions and deictic words, e.g. "The autumn leaves have all fallen now."
The Expressive (alternatively called "emotive" or "affective") Function relates to the Addresser and is
best exemplified by interjections and other sound changes that do not alter the denotative meaning
of an utterance but do add information about the Addresser's (speaker's) internal state, e.g. "Wow,
what a view!"
The Conative Function engages the Addressee directly and is best illustrated by vocatives and
imperatives, e.g. "Tom! Come inside and eat!"
The Poetic Function focuses on "the message for its own sake" and is the operative function in poetry
as well as slogans.
The Phatic Function is language for the sake of interaction and is therefore associated with the
Contact factor. The Phatic Function can be observed in greetings and casual discussions of the
weather, particularly with strangers.
The Metalingual (alternatively called "metalinguistic" or "reflexive") Function is the use of language
(what Jakobson calls "Code") to discuss or describe itself.
Related fields
There is considerable overlap between pragmatics and sociolinguistics, since both share an interest
in linguistic meaning as determined by usage in a speech community. However, sociolinguists tend to
be more interested in variations in language within such communities.
Pragmatics helps anthropologists relate elements of language to broader social phenomena; it thus
pervades the field of linguistic anthropology. Because pragmatics describes generally the forces in
play for a given utterance, it includes the study of power, gender, race, identity, and their
interactions with individual speech acts. For example, the study of code switching directly relates to
pragmatics, since a switch in code effects a shift in pragmatic force.
According to Charles W. Morris, pragmatics tries to understand the relationship between signs and
their users, while semantics tends to focus on the actual objects or ideas to which a word refers, and
syntax (or "syntactics") examines relationships among signs or symbols. Semantics is the literal
meaning of an idea whereas pragmatics is the implied meaning of the given idea.
Speech Act Theory, pioneered by J.L. Austin and further developed by John Searle, centers around
the idea of the performative, a type of utterance that performs the very action it describes. Speech
Act Theory's examination of Illocutionary Acts has many of the same goals as pragmatics, as outlined
above.
Pragmatics in philosophy
Pragmatics (more specifically, Speech Act Theory's notion of the performative) underpins Judith
Butler's theory of gender performativity. In Gender Trouble, she claims that gender and sex are not
natural categories, but socially constructed roles produced by "reiterative acting."
In Excitable Speech she extends her theory of performativity to hate speech and censorship, arguing
that censorship necessarily strengthens any discourse it tries to suppress and therefore, since the
state has sole power to define hate speech legally, it is the state that makes hate speech
performative.
Jacques Derrida remarked that some work done under Pragmatics aligned well with the program he
outlined in his book Of Grammatology.
Émile Benveniste argued that the pronouns "I" and "you" are fundamentally distinct from other
pronouns because of their role in creating the subject.
Gilles Deleuze and Félix Guattari discuss linguistic pragmatics in the fourth chapter of A Thousand
Plateaus ("November 20, 1923: Postulates of Linguistics"). They draw three conclusions from Austin:
(1) A performative utterance doesn't communicate information about an act second-hand: it is the
act; (2) Every aspect of language ("semantics, syntactics, or even phonematics") functionally interacts
with pragmatics; (3) There is no distinction between language and speech. This last conclusion
attempts to refute Saussure's division between langue and parole and Chomsky's distinction
between surface structure and deep structure simultaneously.
Significant works
J. L. Austin's How To Do Things With Words
Paul Grice's cooperative principle and conversational maxims
Brown & Levinson's Politeness Theory
Geoffrey Leech's politeness maxims
Levinson's Presumptive Meanings
Jürgen Habermas's universal pragmatics
Dan Sperber and Deirdre Wilson's relevance theory

Principles and Parameters


Principles and parameters is a framework in generative linguistics. Principles and parameters was
largely formulated by the linguists Noam Chomsky and Howard Lasnik. Today, many linguists have
adopted this framework, and it is considered the dominant form of mainstream generative
linguistics.
Framework
The central idea of principles and parameters is that a person's syntactic knowledge can be modelled
with two formal mechanisms:
A finite set of fundamental principles that are common to all languages; e.g., that a sentence must
always have a subject, even if it is not overtly pronounced.
A finite set of parameters that determine syntactic variability amongst languages; e.g., a binary
parameter that determines whether or not the subject of a sentence must be overtly pronounced
(this example is sometimes referred to as the Pro-drop parameter).
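As a toy illustration of how binary parameter settings could derive surface differences between languages, consider the sketch below; it assumes just two parameters and invented mini-lexicons, whereas real proposals involve many more parameters whose exact inventory is contested.

from dataclasses import dataclass

@dataclass
class ParameterSettings:
    pro_drop: bool      # may a recoverable subject go unpronounced?
    head_initial: bool  # does the verb precede its object?

ENGLISH = ParameterSettings(pro_drop=False, head_initial=True)
SPANISH = ParameterSettings(pro_drop=True, head_initial=True)
JAPANESE = ParameterSettings(pro_drop=True, head_initial=False)

def linearize(settings, subject, verb, obj, subject_recoverable=True):
    # Drop the subject only if the parameter allows it and context recovers it.
    words = [] if (settings.pro_drop and subject_recoverable) else [subject]
    words += [verb, obj] if settings.head_initial else [obj, verb]
    return " ".join(words)

print(linearize(ENGLISH, "she", "ate", "rice"))               # she ate rice
print(linearize(SPANISH, "ella", "comió", "arroz"))           # comió arroz
print(linearize(JAPANESE, "kanojo-ga", "tabeta", "gohan-o"))  # gohan-o tabeta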
Within this framework, the goal of linguistics is to identify all of the principles and parameters that
are universal to human language (called: Universal Grammar). As such, any attempt to explain the
syntax of a particular language using a principle or parameter is cross-examined with the evidence
available in other languages. This leads to continual refinement of the theoretical machinery of
generative linguistics in an attempt to account for as much syntactic variation in human language as
possible.
Language acquisition
According to this framework, principles and parameters are part of a genetically innate universal
grammar (UG) which all humans possess, barring any genetic disorders. As such, principles and
parameters do not need to be learned by exposure to language. Rather, exposure to language merely
triggers the parameters to adopt the correct setting.
Criticism
Recordings and transcriptions of natural, everyday conversation show a different picture of
what language looks like and how it is used. For example, while formal linguistics takes the sentence
to be the canonical unit of analysis, conversation analysis (CA) takes the turn at talk as canonical.
Speakers in conversation often do not use complete sentences or even complete words to converse.
Rather, discourse is composed of sequences of turns which are composed of turn constructional
units (e.g. a word, phrase, clause, sentence). In CA, the form and meaning of an utterance is a
product of situated activity, which is to say meaning is highly contextual (within a social, interactive
context) and contingent upon how participants respond to each other, regardless of the grammatical
completeness of an utterance.
Other discourse and corpus linguistic analyses have found recursion and other forms of grammatical
complexity to be rather rare in spoken discourse (especially in preliterate societies) but common in
written discourse, suggesting that much of grammatical complexity may in fact be a product of
literacy training. Other critics point out that there is little if anything that can unequivocally be
called universal across the world's languages. Whereas Chomsky and other formal linguists have
painted language as a static, linear information transmission system, discourse analyses have focused
on the dynamic, dialogic, and social nature of language use in social situations.
Strong evidence from historical linguistics also suggests that grammar is an emergent property of
language use. Grammars change over time, sometimes gaining in complexity (e.g. emergence of the
future tense "gonna" in English; evolution of the triconsonantal root in Semitic languages), other
times becoming simpler (e.g. loss of Latin case system in French, Italian, Spanish, etc.). Most often,
the structural changes to a language's grammar occur as the byproducts of other processes over the
course of many incremental and accumulative alterations to existing structure (e.g. by a series of
extensions of meaning of lexical items to new contexts which leads to grammaticalization). Hopper
has referred to grammar as layers of the sediments of language usage (ibid.). In other words,
grammars have historically emerged/evolved because of the interactions between accumulations of
changes in language use (author's note: perhaps this unpredictability due to historical contingency is
one reason why there are so few, if any, grammatical universals). The language evolution theorist
Terrence Deacon notes that it is logically problematic to consider language structure as innate, that
is, as having been subject to the forces of natural selection, because languages change much too
quickly for natural selection to act upon them.
Another source of criticism is the binary nature of parameters in the framework. For example, the
linguist Larry Trask argues that the ergative case system of the Basque language is not a simple
binary parameter, and that different languages can have different levels of ergativity.
The influence of principles and parameters is most apparent in the works of linguists who subscribe
to the Minimalist Program, Noam Chomsky's most recent contribution to linguistics. This program of
research utilizes conceptions of economy to enhance the search for universal principles and
parameters. Linguists in this program assume that humans use as economical a system as possible in
their innate syntactic knowledge.
Examples
Examples of theorized principles are:
Structure preservation principle
Trace erasure principle
Projection principle
Examples of theorized parameters are:


Ergative case parameter
Head directionality parameter
Nominal mapping parameter
Null subject parameter
Polysynthesis parameter
Pro-drop parameter
Serial verb parameter
Subject placement parameter
Subject side parameter
Topic prominent parameter
Verb attraction parameter

Prosody (linguistics)
In linguistics, prosody (from Greek προσῳδία, prosōidía) is the rhythm, stress, and intonation of
speech. Prosody may reflect various features of the speaker or the utterance: the emotional state
of a speaker; whether an utterance is a statement, a question, or a command; whether the speaker
is being ironic or sarcastic; emphasis, contrast, and focus; or other elements of language that may
not be encoded by grammar or choice of vocabulary.
Acoustic attributes of prosody
In terms of acoustics, the prosodics of oral languages involve variation in syllable length, loudness,
pitch, and the formant frequencies of speech sounds. In cued speech and sign languages, prosody
involves the rhythm, length, and tension of gestures, along with mouthing and facial expressions.
Prosody is absent in writing, which is one reason e-mail, for example, may be misunderstood.
Orthographic conventions to mark or substitute for prosody include punctuation (commas,
exclamation marks, question marks, scare quotes, and ellipses), typographic styling for emphasis
(italic, bold, and underlined text), and emoticons.
The details of a language's prosody depend upon its phonology. For instance, in a language with
phonemic vowel length, this must be marked separately from prosodic syllable length. In similar
manner, prosodic pitch must not obscure tone in a tone language if the result is to be intelligible.
Although tone languages such as Mandarin have prosodic pitch variations in the course of a
sentence, such variations are long and smooth contours, on which the short and sharp lexical tones
are superimposed. If pitch can be compared to ocean waves, the swells are the prosody, and the
wind-blown ripples in their surface are the lexical tones, as with stress in English. The word dessert
has greater stress on the second syllable, compared to desert, which has greater stress on the first;
but this distinction is not obscured when the entire word is stressed by a child demanding "Give me
dessert!" Vowels in many languages are likewise pronounced differently (typically less centrally) in a
careful rhythm or when a word is emphasized, but not so much as to overlap with the formant
structure of a different vowel. Both lexical and prosodic information are encoded in rhythm,
loudness, pitch, and vowel formants.
The prosodic domain
Prosodic features are suprasegmental. They are not confined to any one segment, but occur in some
higher level of an utterance. These prosodic units are the actual phonetic "spurts", or chunks of
speech. They need not correspond to grammatical units such as phrases and clauses, though they
may; and these facts suggest insights into how the brain processes speech.
Prosodic units are marked by phonetic cues, such as a coherent pitch contour or the gradual
decline in pitch and lengthening of vowels over the duration of the unit, until the pitch and speed are
reset to begin the next unit. Breathing, both inhalation and exhalation, seems to occur only at these
boundaries where the prosody resets.
"Prosodic structure" is important in language contact and lexical borrowing. Linguist Ghil'ad
Zuckermann demonstrates that in "Israeli" (his term for Modern Hebrew), the XiXX verb-template is
much more productive than the XaXX verb-template because in morphemic adaptations of nonHebrew stems, the XiXX verb-template is more likely to retain in all conjugations throughout the
tenses the prosodic structure (e.g., the consonant clusters and the location of the vowels) of the
stem.
For example, the Israeli verb le-transfér "to transfer (people)" is fitted into the XiXéX verb-template.
In the past (3rd person, masculine, singular) one says trinsfér, in the present metransfér and in the
future yetransfér. The consonant clusters of the stem transfer are kept throughout. Now, let's try to
fit the stem transfer into the XaXáX verb-template, which in fact used to be the most productive one
in Classical Hebrew. The normal pattern can be seen in garám, gorém, yigróm "cause" (past, present,
future). So, yesterday, he transfár "transferred (people)"; today, he tronsfér. So far so good; the
consonant clusters and the location of the vowels of transfer are maintained, the specific
characteristics of the vowels (e.g. whether they are a or i) being less important. However, the future
form, yitrnsfór, is impossible because, among other things, lacking a vowel between the r and the n, it
violates the prosodic structure of the stem transfer.
According to Zuckermann, this is exactly why the stem click "select by pressing one of the buttons on
the computer mouse" was fitted into the hiXXíX verb-template, resulting in hiklík rather than in the
XiXéX (kilék) or XaXáX (kalák) verb-templates. The form hiklík is the only one preserving the [kl]
cluster.
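The cluster-preservation reasoning can be sketched as a simple check: treat a template as redistributing vowels over the stem's consonants, and reject any candidate form whose consonant clusters differ from the stem's. The romanizations and the crude vowel test below are assumptions for illustration.

VOWELS = set("aeiou")

def clusters(word):
    # Maximal consonant runs of length > 1, e.g. "transfer" -> {"tr", "nsf"}.
    runs, run = [], ""
    for ch in word + " ":
        if ch.isalpha() and ch not in VOWELS:
            run += ch
        else:
            if len(run) > 1:
                runs.append(run)
            run = ""
    return set(runs)

def fits(stem, candidate):
    # A form fits if it neither breaks up a stem cluster nor fuses new ones.
    return clusters(candidate) == clusters(stem)

print(fits("transfer", "yetransfer"))  # True: the XiXeX-type future keeps tr, nsf
print(fits("transfer", "yitrnsfor"))   # False: the vowel between r and n is lost
print(fits("klik", "hiklik"))          # True: hiXXiX keeps the [kl] cluster
print(fits("klik", "kilek"))           # False: XiXeX breaks up [kl]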
One important conclusion is that prosodic considerations supersede semantic ones. For example,
although hiXXíX is historically the causative verb-template, it is employed on purely phonological
grounds in the intransitive hishvíts "show off" (from Yiddish shvits) and in the ambitransitive (in
fact, usually intransitive) hiklík "click" (cf. English click).
Prosody and emotion
Emotional prosody is the expression of feelings using prosodic elements of speech. It was recognized
by Charles Darwin in The Descent of Man as predating the evolution of human language: "Even
monkeys express strong feelings in different tones: anger and impatience by low, fear and pain by
high notes." Native speakers listening to actors reading emotionally neutral text while projecting
emotions correctly recognized happiness 62% of the time, anger 95%, surprise 91%, sadness 81%,
and neutral tone 76%. When a database of this speech was processed by computer, segmental
features allowed better than 90% recognition of happiness and anger, while suprasegmental
prosodic features allowed only 44%-49% recognition. The reverse was true for surprise, which was
recognized only 69% of the time by segmental features and 96% of the time by suprasegmental
prosody. In typical conversation (no actor voice involved), the recognition of emotion may be quite
low, of the order of 50%, hampering the complex interrelationship function of speech advocated by
some authors.
Brain location of prosody
An aprosodia is an acquired or developmental impairment in comprehending or generating the
emotion conveyed in spoken language.

Producing these nonverbal elements requires intact motor areas of the face, mouth, tongue, and
throat. This area is associated with Brodmann areas 44 and 45 (Broca's area) of the left frontal lobe.
Damage to areas 44/45 produces motor aprosodia, with the nonverbal elements of speech being
disturbed (facial expression, tone, rhythm of voice).
Understanding these nonverbal elements requires an intact and properly functioning Brodmann area
22 (Wernicke's area) in the right hemisphere. Right-hemispheric area 22 aids in the interpretation of
prosody, and damage causes sensory aprosodia, with the patient unable to comprehend changes in
voice and body language.
Prosody is dealt with by a right-hemisphere network that is largely a mirror image of the left
perisylvian zone. Damage to the right inferior frontal gyrus causes a diminished ability to convey
emotion or emphasis by voice or gesture, and damage to right superior temporal gyrus causes
problems comprehending emotion or emphasis in the voice or gestures of others.
Saussurean Paradox
How can speakers continue to use a language effectively when that language is constantly changing?
The Swiss linguist Ferdinand de Saussure was the first to demonstrate that a language is not just a
collection of linguistic objects like speech sounds and words; instead, it is a highly structured system
in which each element is largely defined by the way it is related to other elements. This structuralist
view of language has dominated linguistic thinking ever since. But it immediately leads to a puzzle.
We know that every language is constantly changing. So: how can a language continue to be a
structured system of speech sounds, words, grammatical forms and sentence structures when all of
these are, at any given moment, in the middle of any number of changes currently in progress?
This paradox greatly puzzled linguists for generations. Today, though, we are well on the way to
resolving it at last. The key insight has come from the study of variation in language. Though variation
was formerly dismissed as peripheral and insignificant, we now realize that variation in fact forms a
large part of the very structure of any language, that speakers make use of that variation just as they
make use of other aspects of language structure, and that variation is the vehicle of change, as
speakers simply shift the frequencies of competing variant forms over time.

Sex Differences in Language


Differences between the speech of men and women. In some languages, there are very conspicuous
differences between men's and women's speech: men and women may use different words for the
same thing, they may use different grammatical endings, they may even use different sets of
consonants and vowels in their pronunciation. English has nothing quite so dramatic as this, but
several decades of research have turned up some interesting differences even in English, though
not all of the early claims have been substantiated by later work. For example, it has been suggested
that women use more tag questions than men (as in It's nice, isn't it?), as if to seek approval for
their opinions, but this has not been borne out by investigation. It has also been suggested that men
swear more than women, but this too appears not to be so, at least among younger speakers.
On the other hand, it does appear to be true that certain words are more typical of women, including
terms of approval like cute, divine and adorable and specific colour terms like beige, burgundy and
ecru. Women have also been observed providing more backchannel (hmm, yeah, ah-ha), asking more
questions, inviting approval and agreement, and generally being more socially supportive in
conversation than men. The British sociolinguist Jennifer Coates and the American sociolinguist
Deborah Tannen have discovered that men and women organize their conversations very differently.
Men tend to be rather competitive; women engage in highly cooperative conversations. Men engage
in floor-holding; women latch on to topics, completing the speaker's turn, or interject approval or
disagreement. That is, a conversation among women is a collaborative enterprise, with all the
women pulling together to construct a satisfactory discourse which is the product of all of them,
while a conversation among men is rather a sequence of individual efforts.
These differences can lead to serious misunderstanding in mixed conversations. While her husband
or boyfriend is speaking, a woman may constantly contribute supporting remarks in the normal
female fashion, but the man may well interpret these remarks as interruptions (which they are not)
and become very annoyed. In fact, it is quite clear that, in mixed conversations, men interrupt
women far more than the reverse.
It is clear that such differences in behaviour are indexes not directly of gender but of power. Men in
subservient job situations and women in powerful settings have been observed using the
power-marked pattern of discourse, regardless of gender.

Stylistics
Stylistics is the study of varieties of language whose properties position that language in context, and
tries to establish principles capable of accounting for the particular choices made by individuals and
social groups in their use of language. A variety, in this sense, is a situationally distinctive use of
language. For example, the language of advertising, politics, religion, individual authors, etc., or the
language of a period in time, all are used distinctively and belong in a particular situation. In other
words, they all have a place, or are said to use a particular 'style'.
Stylistics is a branch of linguistics, which deals with the study of varieties of language, its properties,
principles behind choice, dialogue, accent, length, and register.
Stylistics also attempts to establish principles capable of explaining the particular choices made by
individuals and social groups in their use of language, such as socialisation, the production and
reception of meaning, critical discourse analysis and literary criticism.
Other features of stylistics include the use of dialogue, including regional accents and people's
dialects, descriptive language, the use of grammar, such as the active voice or passive voice, the
distribution of sentence lengths, the use of particular language registers, etc.
Many linguists do not like the term stylistics. The word style, itself, has several connotations that
make it difficult for the term to be defined accurately. However, in Linguistic Criticism, Roger Fowler
makes the point that, in non-theoretical usage, the word stylistics makes sense and is useful in
referring to an enormous range of literary contexts, such as John Milton's grand style, the prose
style of Henry James, the epic and ballad style of classical Greek literature, etc. (Fowler, 1996: 185).
185). In addition, stylistics is a distinctive term that may be used to determine the connections
between the form and effects within a particular variety of language. Therefore, stylistics looks at
what is going on within the language; what the linguistic associations are that the style of language
reveals.
The situation in which a type of language is found can usually be seen as appropriate or
inappropriate to the style of language used. A personal love letter would probably not be a suitable
location for the language of this article. However, within the language of a romantic correspondence
there may be a relationship between the letter's style and its context. It may be the author's
intention to include a particular word, phrase or sentence that not only conveys their sentiments of
affection, but also reflects the unique environment of a lover's romantic composition. Even so, by
using so-called conventional and seemingly appropriate language within a specific context
(apparently fitting words that correspond to the situation in which they appear) there exists the
possibility that this language may lack exact meaning and fail to accurately convey the intended
message from author to reader, thereby rendering such language obsolete precisely because of its
conventionality. In addition, any writer wishing to convey their opinion in a variety of language that
they feel is proper to its context could find themselves unwittingly conforming to a particular style,
which then overshadows the content of their writing.

Suprasegmental
An aspect of pronunciation whose description requires a longer sequence than a single consonant
or vowel. Though phoneticians and linguists had earlier been aware of the importance of
suprasegmental phenomena in speech, the term suprasegmental was coined by the American
structuralists in the 1940s. It covers several rather diverse phenomena.
A very obvious suprasegmental is intonation, since an intonation pattern by definition extends over a
whole utterance or a sizable piece of an utterance. Also clearly suprasegmental is the kind of word-accent called pitch accent, as in Japanese and Basque, in which an accentual pitch pattern extends
over an entire word. Less obvious is stress, but not only is stress a property of a whole syllable but
the stress level of a syllable can only be determined by comparing it with neighbouring syllables
which have greater or lesser degrees of stress.
The tones of tone languages are also suprasegmental, since not only does a tone fall on a whole
syllable but tonal differences like high and low can only be identified by comparing syllables with
neighbouring syllables.
The American structuralists also treated juncture phenomena as suprasegmental. Differences in
juncture are the reason that night rate does not sound like nitrate, or why choose like white shoes,
and why the consonants in the middle of pen-knife and lamp-post are the way they are. Since these
items contain essentially the same sequences of segments, the junctural differences have to be
described in terms of different juncture placement within sequences of segments.
In most of these cases, the phonetic realization of the suprasegmental actually extends over more
than one segment, but the key point is that, in all of them, the description of the suprasegmental
must involve reference to more than one segment. Some people use prosody as a synonym for
suprasegmental, but phonologists prefer the latter term.

Summative Assessment
Summative assessment (or Summative evaluation) refers to the assessment of the learning and
summarizes the development of learners at a particular time. After a period of work, e.g. a unit for
two weeks, the learner sits for a test and then the teacher marks the test and assigns a score. The
test aims to summarize learning up to that point. The test may also be used for diagnostic
assessment to identify any weaknesses and then build on that using formative assessment.
Summative assessment is commonly used to refer to assessment of educational faculty by their
respective supervisor. It is imposed onto the faculty member, and uniformly applied, with the object
of measuring all teachers on the same criteria to determine the level of their performance. It is
meant to meet the school or district's needs for teacher accountability and looks to provide
remediation for sub-standard performance and also provides grounds for dismissal if necessary. The
evaluation usually takes the shape of a form, and consists of check lists and occasionally narratives.
Areas evaluated include classroom climate, instruction, professionalism, and planning and
preparation.
Summative assessment is characterized as assessment of learning and is contrasted with formative
assessment, which is assessment for learning.
It provides information on the product's efficacy (its ability to do what it was designed to do). For
example, did the learners learn what they were supposed to learn after using the instructional
module? In a sense, it does not bother to assess "how they did," but more importantly, by looking at
how the learners performed, it provides information as to whether the product teaches what it is
supposed to teach.
It tends to use well defined evaluation designs. [i.e. fixed time and content]
It provides descriptive analysis. [i.e. in order to give a grade, all the activities done throughout the
year are taken into account]
It tends to stress local effects.
It is unobtrusive and not reactive, as far as possible.
It is positive, tending to stress what students can do rather than what they cannot.

Transformational Grammar
A particular type of generative grammar. In the 1950s, Noam Chomsky introduced into linguistics the
notion of a generative grammar, which has proved to be very influential. Now there are very many
different types of generative grammar which can be conceived of, and Chomsky himself defined and
discussed several quite different types in his early work. But, from the beginning, he himself favoured
a particular type, to which he gave the name transformational grammar, or TG; TG has sometimes
also been called transformational generative grammar.
Most types of generative grammar in which anybody has ever been interested can be usefully viewed
as working like this: starting with nothing, the rules of the grammar build up the structure of a
sentence piece by piece, adding something at each step, until the sentence structure is complete.
Crucially, once something has been added to a sentence structure, it must remain: it cannot be
changed, deleted or moved to a different location.
TG is hugely different. In TG, the structure of a sentence is first built up in the manner just described,
using only context-free rules, which are a simple type of rule widely used in other types of generative
grammar. The structure which results is called the deep structure of the sentence. But, after this,
some further rules apply.
These rules are called transformations, and they are different in nature. Transformations have the
power to change the structure which is already present in a number of ways: not only can they add
new material to the structure (though only in the early versions), but they can also change material
which is already present in various ways, they can move material to a different location, and they can
even delete material from the structure altogether. When all the relevant transformations have
finished applying, the resulting structure is the surface structure of the sentence. Because of the vast
power of transformations, the surface structure may look extremely different from the deep
structure.
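The deep-to-surface idea can be suggested with a toy movement transformation; trees are represented here as nested lists, and the structural description is drastically simplified compared with how transformations were actually stated.

def question_transformation(deep):
    # Subject-auxiliary inversion: move the auxiliary to the front of S.
    _, np, vp = deep                      # expects ["S", NP, ["VP", Aux, ...]]
    aux, *rest = vp[1:]
    return ["S", aux, np, ["VP", *rest]]  # the moved auxiliary now precedes the subject

deep_structure = ["S", ["NP", "the", "cat"], ["VP", "will", "sit"]]
print(question_transformation(deep_structure))
# ['S', 'will', ['NP', 'the', 'cat'], ['VP', 'sit']]  -- the surface structure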
TG is thus a theory of grammar which holds that a sentence typically has more than one level of
structure. Apart from the structure which it obviously has on the surface, it also has an abstract
underlying structure (the deep structure) which may be substantially different. The point of all this, in
Chomsky's view, is that certain important generalizations about the structures of the sentences in a
language may be stated far more easily in terms of abstract deep structures than otherwise; in
addition, the meaning of a sentence can often be determined much more straightforwardly from its
deep structure.
TG has developed through a number of versions, each succeeding the other. In his 1957 book
Syntactic Structures, Chomsky provided only a partial sketch of a very simple type of
transformational grammar. This proved to be inadequate, and, in his 1965 book
Aspects of the Theory of Syntax, Chomsky proposed a very different, and much more complete,
version. This version is variously known as the Aspects model or as the Standard Theory. All
textbooks of TG published before 1980 (and a few of those published more recently) present what is
essentially the Standard Theory, sometimes with a few additions from later work.
Around 1968 the Standard Theory came under attack from a group of younger linguists who hoped
to equate deep structure, previously a purely syntactic level of representation, with the semantic
structure of a sentence (its meaning). This programme, called Generative Semantics, led to the
positing of ever more abstract underlying structures for sentences; it proved unworkable, and it
finally collapsed. Around the same time, two mathematical linguists demonstrated that standard TG
was so enormously powerful that it could, in principle, describe anything which could be described at
all: a potentially catastrophic result, since the whole point of a theory of grammar is to tell us what
is possible in languages and what is not possible. Yet these Peters-Ritchie results suggested that
TG was placing no constraints at all on what the grammar of a human language could be like.
Chomsky responded to all this in the early 1970s by introducing a number of changes to his
framework; the result became known as the Extended Standard Theory, or EST. By the late 1970s
further changes had led to a radically different version dubbed the Revised Extended Standard
Theory, or REST. Among the major innovations of the REST were the introduction of traces, invisible
flags marking the former positions of elements which had been moved, a reduction in the number of
distinct transformations from dozens to just two, and a switch of attention away from the
transformations themselves to the constraints which applied to them.
But Chomsky continued to develop his ideas, and in 1981 he published Lectures on Government and
Binding; this book swept away much of the apparatus of the earlier transformational theories in
favour of a dramatically different, and far more complex, approach called Government-and-Binding
Theory, or GB. GB retains exactly one transformation, and, in spite of the obvious continuity between
the new framework and its predecessors, the name transformational grammar is not usually applied
to GB or to its even more recent successor, the minimalist programme. Hence, for purposes of
linguistic research, transformational grammar may now be regarded as dead, though its influence
has been enormous, and its successors are maximally prominent.

Triangulation
In the social sciences, triangulation is often used to indicate that more than one method is used in
a study with a view to double (or triple) checking results. This is also called "cross examination".
The idea is that one can be more confident with a result if different methods lead to the same result.
If an investigator uses only one method, the temptation is strong to believe in the findings. If an
investigator uses two methods, the results may well clash. By using three methods to get at the
answer to one question, the hope is that two of the three will produce similar answers, or if three
clashing answers are produced, the investigator knows that the question needs to be reframed,
methods reconsidered, or both.
Triangulation in research
Triangulation is a powerful technique that facilitates validation of data through cross verification
from more than two sources. In particular, it refers to the application and combination of several
research methodologies in the study of the same phenomenon.
It can be employed in both quantitative (validation) and qualitative (inquiry) studies.
It is a method-appropriate strategy of founding the credibility of qualitative analyses.
It becomes an alternative to traditional criteria like reliability and validity.
It is the preferred line in the social sciences.
By combining multiple observers, theories, methods, and empirical materials, researchers can hope
to overcome the weakness or intrinsic biases and the problems that come from single method,
single-observer and single-theory studies.
Purpose
The purpose of triangulation in qualitative research is to increase the credibility and validity of the
results. Several scholars have aimed to define triangulation throughout the years.
Cohen and Manion (1986) define triangulation as an "attempt to map out, or explain more fully, the
richness and complexity of human behavior by studying it from more than one standpoint."
Altrichter et al. (1996) contend that triangulation "gives a more detailed and balanced picture of the
situation."

According to O'Donoghue and Punch (2003), triangulation is "a method of cross-checking data from
multiple sources to search for regularities in the research data."

Types
Denzin (1978) identified four basic types of triangulation:
Data triangulation: involves time, space, and persons
Investigator triangulation: involves multiple researchers in an investigation
Theory triangulation: involves using more than one theoretical scheme in the interpretation of the
phenomenon
Methodological triangulation: involves using more than one method to gather data, such as
interviews, observations, questionnaires, and documents.

Order of acquisition
Researchers have found a very consistent order in the acquisition of first language structures by
children, and this has drawn a great deal of interest from SLA scholars. Considerable effort has
been devoted to testing the "identity hypothesis", which asserts that first-language and second-language acquisition conform to the same patterns. This has not been confirmed, perhaps because
second-language learners' cognitive and affective states are so much more advanced, and perhaps
because it is not true. Orders of acquisition in SLA often resemble those found in first language
acquisition, and may have common neurological causes, but there is no convincing evidence for this.
It is not safe to say that the order of L1 acquisition has any easy implications for SLA.
Some research suggests that most learners begin their acquisition process with a "silent period", in
which they speak very little if at all. It is said that for some, this is a period of language shock, in
which the learner actively rejects the incomprehensible input of the new language. However,
research has shown that many "silent" learners are engaging in private speech (sometimes called
"self-talk"). While appearing silent, they are rehearsing important survival phrases and lexical chunks.
These memorized phrases are then employed in the subsequent period of formulaic speech.
Whether by choice or compulsion, other learners have no silent period and pass directly to formulaic
speech. This speech, in which a handful of routines is used to accomplish basic purposes, often shows
few departures from L2 morphosyntax. It eventually gives way to a more experimental phase of
acquisition, in which the semantics and grammar of the target language are simplified and the
learners begin to construct a true interlanguage.

The nature of the transition between formulaic and simplified speech is disputed. Some, including
Krashen, have argued that there is no cognitive relationship between the two, and that the transition
is abrupt. Thinkers influenced by recent theories of the lexicon have preferred to view even native
speaker speech as heavily formulaic, and interpret the transition as a process of gradually developing
a broader repertoire of chunks and a deeper understanding of the rules which govern them. Some
studies have supported both views, and it is likely that the relationship depends in great part on the
learning styles of individual learners.
A flurry of studies took place in the 1970s, examining whether a consistent order of morpheme
acquisition could be shown. Most of these studies did show fairly consistent orders of acquisition for
selected morphemes. For example, among learners of English the cluster of features including the
suffix "-ing", the plural, and the copula were found to consistently precede others such as the article,
auxiliary, and third person singular. However, these studies were widely criticized as not paying
sufficient attention to overuse of the features (idiosyncratic uses outside what are obligatory
contexts in the L2), and sporadic but inconsistent use of the features. More recent scholarship
prefers to view the acquisition of each linguistic feature as a gradual and complex process. For that
reason most scholarship since the 1980s has focused on the sequence, rather than the order, of
feature acquisition.

Poverty of the stimulus


The poverty of the stimulus (POTS) argument is a variant of the epistemological problem of the
indeterminacy of data to theory that claims that grammar is unlearnable given the linguistic data
available to children. As such, the argument strikes against empiricist accounts of language
acquisition. Inversely, the argument is usually construed as in favor of linguistic nativism because it
leads to the conclusion that knowledge of some aspects of grammar must be innate. Nativists claim
that humans are born with a specific representational adaptation for language that both funds and
limits their competence to acquire specific types of natural languages over the course of their
cognitive development and linguistic maturation. The basic idea informs the teachings of Socrates,
Plato, and the Pythagoreans, pervades the work of the Cartesian linguists and Wilhelm von
Humboldt, and surfaces again with the contemporary linguistic theories of Noam Chomsky. The
argument is now generally used to support theories and hypotheses of generative grammar. The
name was coined by Chomsky in his work Rules and Representations. The thesis emerged out of
several of Chomsky's writings on the issue of language acquisition. The argument has long been
controversial within linguistics, forming the empirical backbone for the theory of universal grammar.
Summary
Though Chomsky and his supporters have reiterated the argument in a variety of different manners
(indeed Pullum and Scholz (2002a) provide no fewer than 13 different "subarguments" that can
optionally form part of a poverty-of-stimulus argument), one frequent structure to the argument can
be summed up as follows:
Premises:
There are patterns in all natural languages (i.e. human languages) that cannot be learned by children
using positive evidence alone. Positive evidence is the set of grammatical sentences the language
learner has access to, that is, by observing the speech of others. Negative evidence, on the other
hand, is the evidence available to the language learner about what is not grammatical. For
instance, when a parent corrects a child's speech, the child acquires negative evidence.
Children are only ever presented with positive evidence for these particular patterns. For example,
they only hear others speaking using sentences that are "right", not those that are "wrong".
Children do learn the correct grammars for their native languages.
Conclusion: Therefore, human beings must have some form of innate linguistic capacity that
provides additional knowledge to language learners.
Evidence
For the argument
The validity of the argument itself is unquestioned. Very few people, if any, would argue that
Chomsky's conclusion doesn't follow from his premises. Thus, anyone who accepts the first three
propositions must accept the conclusion. Many linguists accept all of the premises and consider
there to be quite a bit of evidence for them.
Several patterns in language have been claimed to be unlearnable from positive evidence alone. One
example is the hierarchical nature of languages. The grammars of human languages produce
hierarchical tree structures and some linguists argue that human languages are also capable of
infinite recursion (see Context-free grammar). For any given set of sentences generated by a
hierarchical grammar capable of infinite recursion there are an indefinite number of grammars that
could have produced the same data. This would make learning any such language impossible. Indeed,
a proof by E. Mark Gold showed that any formal language that has hierarchical structure capable of
infinite recursion is unlearnable from positive evidence alone, in the sense that it is impossible to
formulate a procedure that will discover with certainty the correct grammar given any arbitrary
sequence of positive data in which each utterance occurs at least once. However, this does not preclude arriving at the correct grammar from typical input sequences (rather than particularly malicious ones), or arriving at an almost perfect approximation to the correct grammar. Indeed, it
has been proved that under very mild assumptions (ergodicity and stationarity), the probability of
producing a sequence that renders language learning impossible is in fact zero.
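The indeterminacy behind Gold's result can be made concrete with a toy example: the two candidate "grammars" below (both invented for illustration) accept exactly the same short strings, so a finite sample of positive evidence cannot distinguish the finite hypothesis from the infinitely recursive one.

```python
# Two toy grammars that agree on all observed strings: a finite list versus
# the recursive language a^n b^n. Positive data alone cannot tell them
# apart, illustrating the indeterminacy behind Gold's result.

def finite_grammar(s):
    """A finite language: just the two observed strings."""
    return s in {"ab", "aabb"}

def recursive_grammar(s):
    """The infinitely recursive language a^n b^n (n >= 1)."""
    n = len(s) // 2
    return n >= 1 and s == "a" * n + "b" * n

observed = ["ab", "aabb"]          # the positive evidence the learner has seen
print(all(finite_grammar(s) for s in observed))      # True
print(all(recursive_grammar(s) for s in observed))   # True
# Both grammars fit the data; they diverge only on unseen strings:
print(finite_grammar("aaabbb"), recursive_grammar("aaabbb"))  # False True
```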
Another example of language pattern claimed to be unlearnable from positive evidence alone is
subject-auxiliary inversion in questions, i.e.:
You are happy.
Are you happy?
There are two hypotheses the language learner might postulate about how to form questions: (1)
The first auxiliary verb in the sentence (here: 'are') moves to the beginning of the sentence, or (2) the
'main' auxiliary verb in the sentence moves to the front. In the sentence above, both rules yield the
same result since there is only one auxiliary verb. But, you can see the difference in this case:
Anyone who is interested can see me later.
Is anyone who interested can see me later?
Can anyone who is interested see me later?
Of course, the result of rule (1) is ungrammatical while the result of rule (2) is grammatical. So, rule
(2) is (approximately) what we actually have in English, not rule (1). The claim, then, is first that children do not encounter sentences as complicated as this often enough to witness a case where the two hypotheses yield different results, and second that, based only on the positive evidence of the simple sentences, children could not possibly decide between (1) and (2). Moreover, even the simple sentences are compatible with a number of incorrect rules (such as "front any auxiliary"). Thus, if rule (2) were not innately known to infants, we would expect half of the adult population to use (1) and half to use (2). Since that does not occur, rule (2) must be innately known.
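The divergence between the two hypotheses can be sketched in code. The model below is deliberately simplified: the relative clause is pre-bracketed by hand, whereas a real learner would have to discover that structure, which is precisely the nativist's point.

```python
# A toy contrast of the two question-formation hypotheses.
AUXILIARIES = {"is", "are", "can", "will"}

def rule1_front_first_aux(tokens):
    """Hypothesis (1): front the linearly first auxiliary."""
    out = list(tokens)
    i = next(j for j, t in enumerate(out) if t in AUXILIARIES)
    return [out.pop(i)] + out

def rule2_front_main_aux(tokens, clause_depth):
    """Hypothesis (2): front the auxiliary of the main clause (depth 0)."""
    out = list(tokens)
    i = next(j for j, t in enumerate(out)
             if t in AUXILIARIES and clause_depth[j] == 0)
    return [out.pop(i)] + out

sentence = ["anyone", "who", "is", "interested", "can", "see", "me", "later"]
# clause depth: 0 = main clause, 1 = inside the relative clause
depth    = [0, 1, 1, 1, 0, 0, 0, 0]

print(" ".join(rule1_front_first_aux(sentence)))
# -> "is anyone who interested can see me later"   (ungrammatical)
print(" ".join(rule2_front_main_aux(sentence, depth)))
# -> "can anyone who is interested see me later"   (grammatical)
```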
The last premise, that children successfully learn language, is considered to be evident in human
speech. Though people occasionally make mistakes, human beings rarely speak ungrammatical
sentences, and generally do not label them as such when they say them. (Ungrammatical in the
descriptive sense, not the prescriptive sense.)
That many linguists accept all three of the premises is testimony to Chomsky's influence in the
discipline, and the persuasiveness of the argument. Nonetheless, the POS argument has many critics, both
inside and outside linguistics.
Against the argument
Though some recognize the theory as valid, the soundness of the poverty of stimulus argument is
widely questioned. Indeed, every one of the three premises of the argument has been questioned at
some point in time. Much of the criticism comes from researchers who study language acquisition
and computational linguistics. Additionally, connectionist researchers have never accepted most of
Chomsky's premises, because these premises are at odds with connectionist beliefs about the
structure of cognition.

The first and most common critique is that positive evidence is actually enough to learn the various
patterns that linguists claim are unlearnable by positive evidence alone. A common argument is that
the brain's mechanisms of statistical pattern recognition could solve many of the imagined
difficulties. For example, researchers using neural networks and other statistical methods have
programmed computers to learn rules such as (2) cited above, and have claimed to have successfully
extracted hierarchical structures, all using positive evidence alone. Indeed, Klein & Manning (2002)
report constructing a computer program that is able to retrieve 80% of all correct syntactic analyses
of text in the Wall Street Journal Corpus using a statistical learning mechanism (unsupervised
grammar induction) demonstrating a clear move away from "toy" grammars.
There is also much criticism about whether negative evidence is really so rarely encountered by
children. Pullum argues that learners probably do get certain kinds of negative evidence. In addition,
if one allows for statistical learning, negative evidence is abundant. Consider that if a language
pattern is never encountered, but its probability of being encountered would be very high were it
acceptable, then the language learner might be right in considering absence of the pattern as
negative evidence. Chomsky accepts that this kind of negative evidence plays a role in language
acquisition, terming it "indirect negative evidence", though he does not think that indirect negative
evidence is sufficient for language acquisition to proceed without Universal Grammar. However,
contra this claim, Ramscar and Yarlett (2007) designed a learning model that successfully simulates
the learning of irregular plurals based on negative evidence, and backed the predictions of this
simulation in empirical tests of young children. Ramscar and Yarlett suggest that failures of
expectation function as forms of implicit negative feedback that allow children to correct their errors.
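The logic of indirect negative evidence can be sketched numerically. The toy computation below (invented counts and probabilities; it follows the spirit, not the detail, of the models discussed above) shows why the absence of a regularized form such as "mouses" is itself informative once expected frequencies are taken into account.

```python
# Indirect negative evidence as a toy computation: if the regularized
# plural "mouses" were grammatical, how often should it have been heard?
# All counts and probabilities below are invented for illustration.

corpus_size = 100_000              # utterances the learner has heard
p_plural_of_mouse = 0.002          # chance an utterance calls for that plural

observed = {"mice": 200, "mouses": 0}

expected_if_regular = corpus_size * p_plural_of_mouse
# Probability of never hearing "mouses" in that many opportunities,
# if it really were the normal form:
p_absence_by_chance = (1 - p_plural_of_mouse) ** corpus_size

print(f"expected occurrences if 'mouses' were standard: {expected_if_regular:.0f}")
print(f"probability of total absence by chance: {p_absence_by_chance:.1e}")
# The absence is astronomically unlikely under the "regular" hypothesis,
# so it functions as indirect negative evidence against that form.
```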
As for the argument based on Gold's proof, it's not clear that human languages are truly capable of
infinite recursion. Clearly, no speaker can ever in fact produce a sentence with an infinite recursive
structure, and in certain cases (for example, center embedding), people are unable to comprehend
sentences with only a few levels of recursion. Chomsky and his supporters have long argued that
such cases are best explained by restrictions on working memory, since this provides a principled
explanation for limited recursion in language use. Some critics argue that this removes the
falsifiability of the premise. Returning to the big picture, it is questionable whether
Gold's research actually bears on the question of natural language acquisition at all, since what Gold
showed is that there are certain classes of formal languages for which some language in the class
cannot be learned given positive evidence alone. It's not at all clear that natural languages fall in such
a class, let alone whether they are the ones that are not learnable.
Another important criticism pertains to specifying the conclusion: "some form of innate linguistic
capacity." All theories of language ability assume some biological component, ranging from the
extensive, abstract principles of Chomsky's Universal Grammar to cognitive processes that are not
specific to language (e.g., forming perceptual and conceptual categories, processing that involves
statistically-based distributional analyses, and creating analogies across constructions). Thus, there
are many different interpretations of the outcome of the POS argument. For example, The Oxford
Handbook of Linguistic Analysis provides summaries of 32 different theories, most of which are very different from Chomsky's version of generative grammar. If one does not adopt Chomsky's linguistic
theory, some of the evidence for the POS argument disappears. For example, the argument that
children must choose between two different rules for "moving" an auxiliary verb does not exist
within most grammatical theories. The argument exists only if one adopts a transformational or
derivational theory that chooses one structure (i.e. declarative) as "underlying" and then abstractly
forms other structures (e.g. questions) by moving elements such as auxiliary verbs. However, many
of the theories in the OHLA are monostratal or constraint-based and do not involve movement (e.g.,
Cognitive Grammar, Construction Grammar, Usage-Based Theory). An additional criticism of the
subject-auxiliary inversion argument is the manner in which it has been expressed: "There are two
hypotheses the language learner might postulate." This statement implies that preschool children are
postulating and choosing between complex hypotheses about syntactic structures. This "child as
linguist" claim is an imprecise metaphor that does not contribute to our understanding of
developmental psychology. In sum, the Poverty of Stimulus argument has essentially no value within
modern psychology and linguistics: Everyone agrees that there is some biological component to
language learning. The arguments that are of much greater importance are those that lead
researchers to choose among, for example, the 32 linguistic theories in the OHLA.
Finally, it has been argued that people may not learn the exact same grammars as each other. If this
is the case, then only a weak version of the third premise is true, as there would be no fully "correct"
grammar to be learned. However, in many cases, Poverty of Stimulus arguments do not in fact
depend on the assumption that there is only one correct grammar, but rather that there is only one
correct class of grammars. For example, the Poverty of Stimulus argument from question formation
depends only on the assumption that everyone learns a structure-dependent grammar.

Problem-based learning
Problem-based learning (PBL) is a student-centered instructional strategy in which students
collaboratively solve problems and reflect on their experiences. It was pioneered and used
extensively at McMaster University, Hamilton, Ontario, Canada. The Materials department at
Queen Mary, University of London was the first Materials department in the UK to introduce PBL.
Characteristics of PBL are:
Learning is driven by challenging, open-ended problems.
Students work in small collaborative groups.
Teachers take on the role as "facilitators" of learning.
Accordingly, students are encouraged to take responsibility for their group and organize and direct
the learning process with support from a tutor or instructor. Advocates of PBL claim it can be used to
enhance content knowledge and foster the development of communication, problem-solving, and
self-directed learning skills.
Evidence supporting problem-based learning
Hmelo-Silver, Duncan, & Chinn cite several studies supporting the success of the constructivist
problem-based and inquiry learning methods. For example, they describe a project called GenScope,
an inquiry-based science software application. Students using the GenScope software showed
significant gains over the control groups, with the largest gains shown in students from basic courses.

Hmelo-Silver et al. also cite a large study by Geier on the effectiveness of inquiry-based science for
middle school students, as demonstrated by their performance on high-stakes standardized tests.
The improvement was 14% for the first cohort of students and 13% for the second cohort. This study
also found that inquiry-based teaching methods greatly reduced the achievement gap for African-American students.
A systematic review of the effects of problem-based learning in medical school on the performance
of doctors after graduation showed clear positive effects on physician competence. This effect was
especially strong for social and cognitive competencies such as coping with uncertainty and
communication skills.
Examples of applying Problem-Based Learning pedagogy to curriculum
Republic Polytechnic (RP) is attempting to implement problem-based learning in all its courses in
various fields - applied science, technology for the arts, engineering, sports, health and leisure,
infocomm technology, hospitality, and communication. Since inception in 2002, the polytechnic in
Singapore has adopted the pedagogy and customized it to support learning in a One-Day One-Problem™ framework. Students in a class of not more than 25 are presented with a problem likely to
happen in a real scenario. A facilitator guides the students through three meetings throughout the
day and helps with discussions and generating problem-solving skills. In the third meeting, students
team up in groups of five, present their findings and suggest ways to solve the problem. The
facilitator explains the 'ideal' solution after the students have all presented and students are
encouraged to raise their opinions. Students are graded daily in this continuous assessment system.
Three understanding tests are conducted in one semester.
In Malaysia, an attempt is being made to introduce a hybrid of problem-based learning in secondary
mathematics called PBL4C, which stands for problem-based learning in the four core areas of the mathematics education framework. These core areas are mathematics contents, thinking processes,
skills, & values, with the student as the focus of learning. This hybrid first sprouted in SEAMEO
RECSAM in 2008.
Several medical schools have incorporated problem-based learning into their curricula, using real
patient cases to teach students how to think like a clinician. More than eighty percent of medical
schools in the United States now have some form of problem-based learning in their programs.
Constructivism and PBL
From a constructivist perspective on problem-based learning (PBL), the role of the instructor is to guide the learning process rather than provide knowledge (Hmelo-Silver & Barrows, 2006). From this
perspective, feedback and reflection on the learning process and group dynamics are essential
components of PBL.
Criticisms of Problem-based learning
Problem-based learning and cognitive load
Sweller and others have published a series of studies over the past twenty years that are relevant to problem-based learning, concerning cognitive load and what they describe as the guidance-fading effect (Sweller, 2006). Sweller et al. conducted several classroom-based studies with students studying algebra problems (Sweller, 1988). These studies have shown that active problem solving early in the learning process is a less effective instructional strategy than studying worked examples (Sweller and Cooper, 1985; Cooper and Sweller, 1987). Active problem solving is certainly useful as learners become more competent and better able to deal with their working memory limitations. But early in the learning process, learners may find it difficult to process a large amount of information in a short amount of time, so the rigors of active problem solving may become an issue for novices. Once learners gain expertise, the scaffolding inherent in problem-based learning helps them avoid these issues.
Sweller (1988) proposed cognitive load theory to explain how novices react to problem solving during the early stages of learning. Sweller et al. suggest presenting worked examples early, followed by a gradual introduction of problems to be solved. They propose other forms of learning early in the learning process (worked examples, goal-free problems, etc.), to be replaced later by completion problems, with the eventual goal of learners solving problems on their own (Sweller, Van Merrienboer, & Paas, 1998). Problem-based learning thus becomes most useful later in the learning process.
Many forms of scaffolding have been implemented in problem-based learning to reduce the cognitive load of learners; these are most useful for fading guidance during problem solving. For example, fading helps learners transition gradually from studying worked examples to solving problems on their own; in this context, backwards fading has been found to be quite effective.
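A backwards-fading schedule of this kind can be sketched schematically (this is an illustration of the general idea, not any published implementation): solution steps are withheld starting from the end of a worked example, so learners first complete only the last step, then the last two, until they solve the whole problem.

```python
# Schematic backwards fading: remove worked-out steps from the END first,
# so learners complete the final step, then the final two, and so on.

worked_example = [
    "state the equation: 2x + 6 = 10",
    "subtract 6 from both sides: 2x = 4",
    "divide both sides by 2: x = 2",
]

def backward_fading(steps):
    """Yield successive practice items with progressively fewer given steps."""
    for n_given in range(len(steps), -1, -1):
        yield steps[:n_given], len(steps) - n_given

for given, missing in backward_fading(worked_example):
    print(f"given {len(given)} step(s), learner completes {missing}:")
    for step in given:
        print("   ", step)
```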
Cognitive effects of problem-based learning
The acquisition and structuring of knowledge in PBL is thought to work through the following
cognitive effects (Schmidt, 1993):
initial analysis of the problem and activation of prior knowledge through small-group discussion
elaboration on prior knowledge and active processing of new information
restructuring of knowledge, construction of a semantic network
social knowledge construction
learning in context
stimulation of curiosity related to presentation of a relevant problem

Speech Act
Speech act is a technical term in linguistics and the philosophy of language. Precise conceptions vary.
In general, "speech act" refers to the act of successfully communicating an intended understanding
to the listener. Speech acts also include greetings, criticisms, invitations, congratulations, etc.
This is different from the term "locutionary act", which is the specific and physical act of creating an
utterance. While a locutionary act is required for a speech act to occur, the locutionary act
represents simply the physical sounds uttered, or the scribbles written, which will then need
interpretation. Because of this difference, "speech act" is often considered a synonym for John
Searle's term "illocutionary act", explained below.
Speech act as an illocutionary act
The concept of an illocutionary act is central to, if not identical with, the concept of a speech act.
Although there are numerous opinions as to what 'illocutionary acts' actually are, there are some
kinds of acts which are widely accepted as illocutionary, as for example promising, ordering
someone, and bequeathing.
Following the usage of, for example, John R. Searle, "speech act" is often meant to refer just to the
same thing as the term illocutionary act, which John L. Austin had originally introduced in How to Do
Things with Words (published posthumously in 1962).
According to Austin's preliminary informal description, the idea of an "illocutionary act" can be
captured by emphasising that "by saying something, we do something", as when someone orders
someone else to go by saying "Go!", or when a minister joins two people in marriage saying, "I now
pronounce you husband and wife." (Austin would eventually define the "illocutionary act" in a more
exact manner.)
Austin distinguishes between illocutionary and perlocutionary speech acts. An interesting type of
illocutionary speech act is that performed in the utterance of what Austin calls performatives, typical
instances of which are "I nominate John to be President", "I sentence you to ten years'
imprisonment", or "I promise to pay you back." In these typical, rather explicit cases of performative
sentences, the action that the sentence describes (nominating, sentencing, promising) is performed
by the utterance of the sentence itself.
Examples
Greeting (in saying, "Hi John!", for instance), apologizing ("Sorry for that!"), describing something
("It is snowing"), asking a question ("Is it snowing?"), making a request and giving an order ("Could
you pass the salt?" and "Drop your weapon or I'll shoot you!"), or making a promise ("I promise I'll
give it back") are typical examples of "speech acts" or "illocutionary acts".
In saying, "Watch out, the ground is slippery", Mary performs the speech act of warning Peter to be
careful.
In saying, "I will try my best to be at home for dinner", Peter performs the speech act of promising
to be at home in time.
In saying, "Ladies and gentlemen, please give me your attention", Mary requests the audience to be
quiet.
In saying, "Race with me to that building over there!", Peter challenges Mary.
Classifying illocutionary speech acts
Searle (1975) has set up the following classification of illocutionary speech acts:
assertives = speech acts that commit a speaker to the truth of the expressed proposition

directives = speech acts that are to cause the hearer to take a particular action, e.g. requests,
commands and advice
commissives = speech acts that commit a speaker to some future action, e.g. promises and oaths
expressives = speech acts that express the speaker's attitudes and emotions towards the proposition, e.g. congratulations, excuses and thanks
declarations = speech acts that change the reality in accord with the proposition of the declaration,
e.g. baptisms, pronouncing someone guilty or pronouncing someone husband and wife
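As a rough illustration of this five-way classification, the toy tagger below maps utterances onto Searle's categories by surface cues. The cue words are invented for the example sentences in this article; a genuine classifier would need context, since the same words can perform different acts.

```python
from enum import Enum, auto

class IllocutionaryClass(Enum):
    ASSERTIVE = auto()
    DIRECTIVE = auto()
    COMMISSIVE = auto()
    EXPRESSIVE = auto()
    DECLARATION = auto()

# Invented surface cues; real classification requires context, not keywords.
CUES = [
    (("i promise", "i swear"), IllocutionaryClass.COMMISSIVE),
    (("please", "could you"), IllocutionaryClass.DIRECTIVE),
    (("thank you", "congratulations", "sorry"), IllocutionaryClass.EXPRESSIVE),
    (("i now pronounce", "i hereby declare"), IllocutionaryClass.DECLARATION),
]

def classify(utterance):
    u = utterance.lower()
    for keywords, cls in CUES:
        if any(k in u for k in keywords):
            return cls
    return IllocutionaryClass.ASSERTIVE   # default: a plain statement

print(classify("I promise I'll give it back"))   # COMMISSIVE
print(classify("Could you pass the salt?"))      # DIRECTIVE
print(classify("It is snowing"))                 # ASSERTIVE
```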
Indirect speech acts
In the course of performing speech acts we ordinarily communicate with each other. The content of
communication may be identical, or almost identical, with the content intended to be
communicated, as when a stranger asks, "What is your name?"
However, the meaning of the linguistic means used (if ever there are linguistic means, for at least
some so-called "speech acts" can be performed non-verbally) may also be different from the content
intended to be communicated. One may, in appropriate circumstances, request Peter to do the
dishes by just saying, "Peter ...!", or one can promise to do the dishes by saying, "Me!" One common
way of performing speech acts is to use an expression which indicates one speech act, and indeed
performs this act, but also performs a further speech act, which is indirect. One may, for instance,
say, "Peter, can you open the window?", thereby asking Peter whether he will be able to open the
window, but also requesting that he do so. Since the request is performed indirectly, by means of
(directly) performing a question, it counts as an indirect speech act.
Indirect speech acts are commonly used to reject proposals and to make requests. For example, a
speaker asks, "Would you like to meet me for coffee?" and another replies, "I have class." The second
speaker used an indirect speech act to reject the proposal. This is indirect because the literal
meaning of "I have class" does not entail any sort of rejection.
This poses a problem for linguists because it is confusing (on a rather simple approach) to see how
the person who made the proposal can understand that his proposal was rejected. Following
substantially an account of H. P. Grice, Searle suggests that we are able to derive meaning out of
indirect speech acts by means of a cooperative process out of which we are able to derive multiple
illocutions; however, the process he proposes does not seem to accurately solve the problem.
Sociolinguistics has studied the social dimensions of conversations. This discipline considers the
various contexts in which speech acts occur.
John Searle's theory of "indirect speech acts"
Searle has introduced the notion of an 'indirect speech act', which in his account is meant to be,
more particularly, an indirect 'illocutionary' act. Applying a conception of such illocutionary acts
according to which they are (roughly) acts of saying something with the intention of communicating
with an audience, he describes indirect speech acts as follows: "In indirect speech acts the speaker
communicates to the hearer more than he actually says by way of relying on their mutually shared
background information, both linguistic and nonlinguistic, together with the general powers of
rationality and inference on the part of the hearer." An account of such acts, it follows, will require
such things as an analysis of mutually shared background information about the conversation, as well
as of rationality and linguistic conventions.
In connection with indirect speech acts, Searle introduces the notions of 'primary' and 'secondary'
illocutionary acts. The primary illocutionary act is the indirect one, which is not literally performed.
The secondary illocutionary act is the direct one, performed in the literal utterance of the sentence
(Searle 178). In the example:
(1) Speaker X: "We should leave for the show or else we'll be late."
(2) Speaker Y: "I am not ready yet."


Here the primary illocutionary act is Y's rejection of X's suggestion, and the secondary illocutionary
act is Y's statement that she is not ready to leave. By dividing the illocutionary act into two subparts,
Searle is able to explain that we can understand two meanings from the same utterance all the while
knowing which is the correct meaning to respond to.

With his doctrine of indirect speech acts Searle attempts to explain how it is possible that a speaker
can say something and mean it, but additionally mean something else. This would be impossible, or
at least it would be an improbable case, if in such a case the hearer had no chance of figuring out
what the speaker means (over and above what she says and means). Searle's solution is that the
hearer can figure out what the indirect speech act is meant to be, and he gives several hints as to
how this might happen. For the previous example a condensed process might look like this:
Step 1: A proposal is made by X, and Y responded by means of an illocutionary act (2).
Step 2: X assumes that Y is cooperating in the conversation, being sincere, and that she has made a
statement that is relevant.
Step 3: The literal meaning of (2) is not relevant to the conversation.
Step 4: Since X assumes that Y is cooperating, there must be another meaning to (2).
Step 5: Based on mutually shared background information, X knows that they cannot leave until Y
is ready. Therefore, Y has rejected X's proposition.
Step 6: X knows that Y has said something with a meaning other than the literal one, and the primary illocutionary act must have been the rejection of X's proposal.
Searle argues that a similar process can be applied to any indirect speech act as a model to find the
primary illocutionary act (178). His proof for this argument is made by means of a series of supposed
"observations" (ibid., 180-182).
Analysis using Searle's theory
In order to generalize this sketch of an indirect request, Searle proposes a program for the analysis of
indirect speech act performances, whatever they are. He makes the following suggestion:

Step 1: Understand the facts of the conversation.


Step 2: Assume cooperation and relevance on behalf of the participants.
Step 3: Establish factual background information pertinent to the conversation.
Step 4: Make assumptions about the conversation based on steps 1–3.
Step 5: If steps 1–4 do not yield a consequential meaning, then infer that there are two illocutionary forces at work.
Step 6: Assume the hearer has the ability to perform the act the speaker suggests. The act that the
speaker is asking be performed must be something that would make sense for one to ask. For
example, the hearer might have the ability to pass the salt when asked to do so by a speaker who is
at the same table, but not have the ability to pass the salt to a speaker who is asking the hearer to
pass the salt during a telephone conversation.
Step 7: Make inferences from steps 1–6 regarding possible primary illocutions.
Step 8: Use background information to establish the primary illocution (Searle 184).
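Searle's program reads almost like pseudocode, and the sketch below renders it that way for the coffee example above. Every predicate is a stand-in for a genuinely hard pragmatic judgment and is hard-coded for this one exchange; the point is only to make the flow of steps 1–8 explicit.

```python
# A loose rendering of Searle's analysis steps for the exchange
#   X: "Would you like to meet me for coffee?"   Y: "I have class."
# Each predicate below is a hard-coded stand-in for a pragmatic judgment.

def literal_meaning_is_relevant(reply, proposal):
    # Step 3: taken literally, "I have class" neither accepts nor rejects.
    return False

def mutually_known(fact):
    # Step 5: shared background information.
    return fact == "having class conflicts with meeting for coffee"

def infer_primary_illocution(proposal, reply):
    # Steps 1-2: record the facts and assume cooperation and relevance.
    assume_cooperation = True
    if literal_meaning_is_relevant(reply, proposal):
        return "take the reply literally"
    # Steps 4-5: the literal meaning is irrelevant, so look for a second
    # illocutionary force, using shared background knowledge.
    if assume_cooperation and mutually_known(
            "having class conflicts with meeting for coffee"):
        # Steps 6-8: the only relevant construal is a rejection.
        return "rejection of the proposal"
    return "unresolved"

print(infer_primary_illocution("meet me for coffee?", "I have class."))
# -> rejection of the proposal
```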

With this process, Searle concludes that he has found a method that will satisfactorily reconstruct
what happens when an indirect speech act is performed.
History
For much of the history of linguistics and the philosophy of language, language was viewed primarily
as a way of making factual assertions, and the other uses of language tended to be ignored. The work of J. L. Austin, particularly his How to Do Things with Words, led philosophers to
pay more attention to the non-declarative uses of language. The terminology he introduced,
especially the notions "locutionary act", "illocutionary act", and "perlocutionary act", occupied an
important role in what was then to become the "study of speech acts". All of these three acts, but
especially the "illocutionary act", are nowadays commonly classified as "speech acts".

Austin was by no means the first one to deal with what one could call "speech acts" in a wider sense.
Earlier treatments may be found in the works of some church fathers, and scholastic philosophers, in
the context of sacramental theology, as well as Thomas Reid, and Charles Sanders Peirce.
Adolf Reinach (1883–1917) has been credited with a fairly comprehensive account of social acts as performative utterances dating to 1913, long before Austin and Searle. His work had little influence, however, perhaps due to his death at 33, serving in the German Army in the First World War.
The term "Speech Act" had also been already used by Karl Bhler in his "Die Axiomatik der
Sprachwissenschaften, Kant-Studien 38 (1933), 43, where he discusses a Theorie der
Sprechhandlungen and in his book Sprachtheorie (Jena: Fischer, 1934) where he uses
"Sprechhandlung" and "Theorie der Sprechakte".
Historical critics
Critical theorists in other areas use speech act theory as a way of approaching aspects of their own discourse. The theory is used mainly in the fields of linguistics and philosophy, and holds that, in speaking, a person acts through a particular set of pre-set conventions. The basics of the theory centre on the idea that words, when placed together, do not always have a fixed meaning.
Austin's work has had many critics; Gorman (1999, p. 109) explains that many people have used his work without fully understanding its criticisms, and Austin's main arguments have had only one notable follow-up work, that by Searle in 1969. Speech-act theory is a continuing discourse, still written about and criticised in hundreds of articles and books. MacKinnon (1973, p. 235) states that "the various conceptual systems we have indicated are only intelligible as extensions of an ordinary language framework", meaning that, as its basis, the theory must first have an already working or ordinary set of rules that are indisputable and reliable.

In language development
Dore (1975) proposed that children's utterances were realizations of one of nine primitive speech
acts:
1. labelling
2. repeating
3. answering
4. requesting (action)
5. requesting (answer)
6. calling
7. greeting
8. protesting
9. practicing

Sign Language
A sign language (also signed language) is a language which, instead of acoustically conveyed sound
patterns, uses visually transmitted sign patterns (manual communication, body language and lip
patterns) to convey meaning, simultaneously combining hand shapes, orientation and movement of
the hands, arms or body, and facial expressions to fluidly express a speaker's thoughts. Sign
languages commonly develop in deaf communities, which can include interpreters, friends and
families of deaf people as well as people who are deaf or hard of hearing themselves.
Wherever communities of deaf people exist, sign languages develop. In fact, their complex spatial
grammars are markedly different from the grammars of spoken languages. Hundreds of sign
languages are in use around the world and are at the cores of local Deaf cultures. Some sign
languages have obtained some form of legal recognition, while others have no status at all.
In addition to sign languages, various signed codes of spoken languages have been developed, such
as Signed English and Warlpiri Sign Language. These are not to be confused with languages, oral or
signed; a signed code of an oral language is simply a signed mode of the language it carries, just as a
writing system is a written mode. Signed codes of oral languages can be useful for learning oral
languages or for expressing and discussing literal quotations from those languages, but they are
generally too awkward and unwieldy for normal discourse. For example, a teacher and deaf student
of English in the United States might use Signed English to cite examples of English usage, but the
discussion of those examples would be in American Sign Language.
Several culturally well developed sign languages are a medium for stage performances such as sign-language poetry. Many of the poetic mechanisms available to signing poets are not available to a
speaking poet.
History of sign language
The written history of sign language began in the 17th century in Spain. In 1620, Juan Pablo Bonet published Reducción de las letras y arte para enseñar a hablar a los mudos (Reduction of letters and art for teaching mute people to speak) in Madrid. It is considered the first modern treatise of phonetics and logopedics, setting out a method of oral education for deaf people by means of manual signs, in the form of a manual alphabet to improve the communication of mute or deaf people.
Building on Bonet's language of signs, Charles-Michel de l'Épée published his alphabet in the 18th century; it has survived basically unchanged until the present time.
In 1755, Abbé de l'Épée founded the first private school for deaf children in Paris; Laurent Clerc was
arguably its most famous graduate. He went to the United States with Thomas Hopkins Gallaudet to
found the American School for the Deaf in Hartford, Connecticut. Gallaudet's son, Edward Miner Gallaudet, founded a school for the deaf in 1857, which in 1864 became Gallaudet University in
Washington, DC, the only liberal arts university for and of the deaf in the world.
Generally, each spoken language has a sign language counterpart, inasmuch as each linguistic population will contain Deaf members who will generate a sign language. In much the same way that
geographical or cultural forces will isolate populations and lead to the generation of different and
distinct spoken languages, the same forces operate on signed languages and so they tend to maintain
their identities through time in roughly the same areas of influence as the local spoken languages.
This occurs even though sign languages have no relation to the spoken languages of the lands in
which they arise. There are notable exceptions to this pattern, however, as some geographic regions
sharing a spoken language have multiple, unrelated signed languages. Variations within a 'national'
sign language can usually be correlated to the geographic location of residential schools for the deaf.
International Sign, formerly known as Gestuno, is used mainly at international Deaf events such as
the Deaflympics and meetings of the World Federation of the Deaf. Recent studies conclude that while International Sign is a kind of pidgin, it is more complex than a typical pidgin and indeed is more like a full signed language.
Linguistics of sign
In linguistic terms, sign languages are as rich and complex as any oral language, despite the common
misconception that they are not "real languages". Professional linguists have studied many sign
languages and found them to have every linguistic component required to be classed as true
languages.
Sign languages are not mime - in other words, signs are conventional, often arbitrary and do not
necessarily have a visual relationship to their referent, much as most spoken language is not
onomatopoeic. While iconicity is more systematic and wide-spread in sign languages than in spoken
ones, the difference is not categorical. Nor are they a visual rendition of an oral language. They have
complex grammars of their own, and can be used to discuss any topic, from the simple and concrete
to the lofty and abstract.
Sign languages, like oral languages, organize elementary, meaningless units (phonemes; once called
cheremes in the case of sign languages) into meaningful semantic units. The elements of a sign are
Handshape (or Handform), Orientation (or Palm Orientation), Location (or Place of Articulation),
Movement, and Non-manual markers (or Facial Expression), summarised in the acronym HOLME.
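The five HOLME parameters amount to a record-like structure in which all fields are articulated simultaneously. The sketch below models that; the field values for the example sign are illustrative glosses, not an authoritative phonological description.

```python
from dataclasses import dataclass

@dataclass
class Sign:
    """The five simultaneous parameters of a sign (acronym HOLME)."""
    handshape: str     # H: configuration of the fingers
    orientation: str   # O: which way the palm faces
    location: str      # L: place of articulation on or near the body
    movement: str      # M: path or internal movement of the hands
    nonmanual: str     # E: facial expression and other non-manual markers

# Illustrative gloss only, not an authoritative description of any sign.
mother = Sign(
    handshape="open hand, fingers spread ('5' hand)",
    orientation="palm facing left",
    location="chin",
    movement="thumb taps chin",
    nonmanual="neutral",
)
print(mother)
```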
Common linguistic features of deaf sign languages are extensive use of classifiers, a high degree of
inflection, and a topic-comment syntax. Many unique linguistic features emerge from sign languages'
ability to produce meaning in different parts of the visual field simultaneously. For example, the
recipient of a signed message can read meanings carried by the hands, the facial expression and the
body posture in the same moment. This is in contrast to oral languages, where the sounds that
comprise words are mostly sequential (tone being an exception).
Sign languages' relationships with oral languages
A common misconception is that sign languages are somehow dependent on oral languages, that is,
that they are oral language spelled out in gesture, or that they were invented by hearing people.
Hearing teachers in deaf schools, such as Thomas Hopkins Gallaudet, are often incorrectly referred to
as inventors of sign language.
Manual alphabets (fingerspelling) are used in sign languages, mostly for proper names and technical
or specialised vocabulary borrowed from spoken languages. The use of fingerspelling was once taken
as evidence that sign languages were simplified versions of oral languages, but in fact it is merely one
tool among many. Fingerspelling can sometimes be a source of new signs, which are called lexicalized
signs.
On the whole, deaf sign languages are independent of oral languages and follow their own paths of
development. For example, British Sign Language and American Sign Language are quite different
and mutually unintelligible, even though the hearing people of Britain and America share the same
oral language.
Similarly, countries which use a single oral language throughout may have two or more sign
languages; whereas an area that contains more than one oral language might use only one sign
language. South Africa, which has 11 official oral languages and a similar number of other widely
used oral languages, is a good example of this. It has only one sign language with two variants due to
its history of having two major educational institutions for the deaf which have served different
geographic areas of the country.
Spatial grammar and simultaneity
Sign languages exploit the unique features of the visual medium (sight). Oral language is linear. Only
one sound can be made or received at a time. Sign language, on the other hand, is visual; hence a
whole scene can be taken in at once. Information can be loaded into several channels and expressed
simultaneously. As an illustration, in English one could utter the phrase, "I drove here". To add
information about the drive, one would have to make a longer phrase or even add a second, such as,
"I drove here along a winding road," or "I drove here. It was a nice drive." However, in American Sign
Language, information about the shape of the road or the pleasing nature of the drive can be
conveyed simultaneously with the verb 'drive' by inflecting the motion of the hand, or by taking
advantage of non-manual signals such as body posture and facial expression, at the same time that
the verb 'drive' is being signed. Therefore, whereas in English the phrase "I drove here and it was
very pleasant" is longer than "I drove here," in American Sign Language the two may be the same
length.
In fact, in terms of syntax, ASL shares more with spoken Japanese than it does with English.
Use of signs in hearing communities
Gesture is a typical component of spoken languages. More elaborate systems of manual
communication have developed in places or situations where speech is not practical or permitted,
such as cloistered religious communities, scuba diving, television recording studios, loud workplaces,
stock exchanges, baseball, hunting (by groups such as the Kalahari bushmen), or in the game
Charades. In Rugby Union the Referee uses a limited but defined set of signs to communicate his/her
decisions to the spectators. Recently, there has been a movement to teach and encourage the use of
sign language with toddlers before they learn to talk, because such young children can communicate
effectively with signed languages well before they are physically capable of speech. This is typically
referred to as Baby Sign. There is also a movement to use signed languages more with non-deaf and
non-hard-of-hearing children with other causes of speech impairment or delay, for the obvious
benefit of effective communication without dependence on speech.
On occasion, where the prevalence of deaf people is high enough, a deaf sign language has been
taken up by an entire local community. Famous examples of this include Martha's Vineyard Sign
Language in the USA, Kata Kolok in a village in Bali, Adamorobe Sign Language in Ghana and Yucatec
Maya sign language in Mexico. In such communities deaf people are not socially disadvantaged.
Many Australian Aboriginal sign languages arose in a context of extensive speech taboos, such as
during mourning and initiation rites. They are or were especially highly developed among the
Warlpiri, Warumungu, Dieri, Kaytetye, Arrernte, and Warlmanpa, and are based on their respective
spoken languages.
A pidgin sign language arose among tribes of American Indians in the Great Plains region of North
America (see Plains Indian Sign Language). It was used to communicate among tribes with different
spoken languages. It still has users today, especially among the Crow, Cheyenne, and Arapaho. Unlike
other sign languages developed by hearing people, it shares the spatial grammar of deaf sign
languages.
Telecommunications facilitated signing
Main articles: Video Remote Interpreting and Video Relay Service
One of the first demonstrations of the ability for telecommunications to help sign language users
communicate with each other occurred when AT&T's videophone (trademarked as the
'Picturephone') was introduced to the public at the 1964 New York World's Fair: two deaf users were able to freely communicate with each other between the fair and another city. Various organizations
have also conducted research on signing via videotelephony.
Sign language interpretation services via Video Remote Interpreting (VRI) or a Video Relay Service (VRS) are useful today when one of the parties is deaf, hard-of-hearing or speech-impaired (mute). In such cases the interpretation flow is normally within the same principal language, such as French Sign Language (FSL) to spoken French, Spanish Sign Language (SSL) to spoken Spanish, British Sign Language (BSL) to spoken English, and American Sign Language (ASL) also to spoken English (since BSL and ASL are completely distinct). Multilingual sign language interpreters, who can also translate across principal languages (such as to and from SSL, to and from spoken English), are also available, albeit less frequently. Such activities involve
considerable effort on the part of the translator, since sign languages are distinct natural languages
with their own construction and syntax, different from the aural version of the same principal
language.
With video interpreting, sign language interpreters work remotely with live video and audio feeds, so
that the interpreter can see the deaf or mute party, and converse with the hearing party, and vice
versa. Much like telephone interpreting, video interpreting can be used for situations in which no on-site interpreters are available. However, video interpreting cannot be used for situations in which all
parties are speaking via telephone alone. VRI and VRS interpretation requires all parties to have the
necessary equipment. Some advanced equipment enables interpreters to remotely control the video
camera, in order to zoom in and out or to point the camera toward the party that is signing.
Home sign
Main article: Home sign
Sign systems are sometimes developed within a single family. For instance, when hearing parents
with no sign language skills have a deaf child, an informal system of signs will naturally develop,
unless repressed by the parents. The term for these mini-languages is home sign (sometimes
homesign or kitchen sign).
Home sign arises due to the absence of any other way to communicate. Within the span of a single
lifetime and without the support or feedback of a community, the child naturally invents signals to
facilitate the meeting of his or her communication needs. Although this kind of system is grossly
inadequate for the intellectual development of a child and it comes nowhere near meeting the
standards linguists use to describe a complete language, it is a common occurrence. No type of Home
Sign is recognized as an official language.
Classification of sign languages
Although deaf sign languages have emerged naturally in deaf communities alongside or among
spoken languages, they are unrelated to spoken languages and have different grammatical structures
at their core. A group of sign "languages" known as manually coded languages are more properly
understood as signed modes of spoken languages, and therefore belong to the language families of
their respective spoken languages. There are, for example, several such signed encodings of English.
There has been very little historical linguistic research on sign languages, and few attempts to
determine genetic relationships between sign languages, other than simple comparison of lexical
data and some discussion about whether certain sign languages are dialects of a language or
languages of a family. Languages may be spread through migration, through the establishment of
deaf schools (often by foreign-trained educators), or due to political domination.
Language contact is common, making clear family classifications difficult; it is often unclear
whether lexical similarity is due to borrowing or a common parent language. Contact occurs between
sign languages, between signed and spoken languages (Contact Sign), and between sign languages
and gestural systems used by the broader community. One author has speculated that Adamorobe
Sign Language may be related to the "gestural trade jargon used in the markets throughout West
Africa", in vocabulary and areal features including prosody and phonetics.
The only comprehensive classification along these lines going beyond a simple listing of languages
dates back to 1991. The classification is based on the 69 sign languages from the 1988 edition of
Ethnologue that were known at the time of the 1989 conference on sign languages in Montreal and
11 more languages the author added after the conference.
Typology of sign languages
Linguistic typology (going back to Edward Sapir) is based on word structure and distinguishes
morphological classes such as agglutinating/concatenating, inflectional, polysynthetic, incorporating,
and isolating ones.
Sign languages vary in syntactic typology as there are different word orders in different languages. For example, ÖGS (Austrian Sign Language) is Subject-Object-Verb while ASL is Subject-Verb-Object. Correspondence with the surrounding spoken languages is not improbable.
Morphologically speaking, wordshape is the essential factor. Canonical wordshape results from the
systematic pairing of the binary values of two features, namely syllabicity (mono- or poly-) and
morphemicity (mono- or poly-). Brentari classifies sign languages as a whole group, determined by the medium of communication (visual instead of auditory), as monosyllabic and polymorphemic. That means that one syllable (i.e. one word, one sign) can express several morphemes; for example, the subject and object of a verb may determine the direction of the verb's movement (inflection). This is necessary for sign languages to ensure a production rate comparable to spoken languages, since producing one sign takes much longer than uttering one word - but on a sentence-to-sentence comparison, signed and spoken languages share approximately the same speed.
Written forms of sign languages
Sign language differs from oral language in its relation to writing. The phonemic systems of oral
languages are primarily sequential: that is, the majority of phonemes are produced in a sequence
one after another, although many languages also have non-sequential aspects such as tone. As a
consequence, traditional phonemic writing systems are also sequential, with at best diacritics for
non-sequential aspects such as stress and tone.
Sign languages have a higher non-sequential component, with many "phonemes" produced
simultaneously. For example, signs may involve fingers, hands, and face moving simultaneously, or
the two hands moving in different directions. Traditional writing systems are not designed to deal
with this level of complexity.
Partially because of this, sign languages are not often written. In those few countries with good educational opportunities available to the deaf, many deaf signers can read and write the oral language of their country at a level sufficient to consider them "functionally literate." However, in many countries, deaf education is very poor and/or very limited. As a consequence, most deaf people have little to no literacy in their country's spoken language.
However, there have been several attempts at developing scripts for sign language. These have
included both "phonetic" systems, such as HamNoSys (the Hamburg Notational System) and
SignWriting, which can be used for any sign language, and "phonemic" systems such as the one used
by William Stokoe in his 1965 Dictionary of American Sign Language, which are designed for a specific
language.
These systems are based on iconic symbols. Some, such as SignWriting and HamNoSys, are
pictographic, being conventionalized pictures of the hands, face, and body; others, such as the
Stokoe notation, are more iconic. Stokoe used letters of the Latin alphabet and Arabic numerals to
indicate the handshapes used in fingerspelling, such as 'A' for a closed fist, 'B' for a flat hand, and '5'
for a spread hand; but non-alphabetic symbols for location and movement, such as '[]' for the trunk
of the body, '×' for contact, and '^' for an upward movement. David J. Peterson has attempted to
create a phonetic transcription system for signing that is ASCII-friendly known as the Sign Language
International Phonetic Alphabet (SLIPA).
SignWriting, being pictographic, is able to represent simultaneous elements in a single sign. The
Stokoe notation, on the other hand, is sequential, with a conventionalized order of a symbol for the
location of the sign, then one for the hand shape, and finally one (or more) for the movement. The
orientation of the hand is indicated with an optional diacritic before the hand shape. When two
movements occur simultaneously, they are written one atop the other; when sequential, they are
written one after the other. Neither the Stokoe nor HamNoSys scripts are designed to represent
facial expressions or non-manual movements, both of which SignWriting accommodates easily,
although this is being gradually corrected in HamNoSys.
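The contrast between the notations can be thought of as a serialization problem. The sketch below linearizes a sign's simultaneous parameters in the conventional Stokoe order (location, then handshape, then movement, with orientation as an optional diacritic before the handshape); the symbols are the placeholder glyphs mentioned above, not genuine Stokoe characters.

```python
# Illustrative only: Stokoe-style notation linearizes a sign's
# simultaneous parameters into a fixed order, location-handshape-movement,
# with orientation as an optional diacritic before the handshape.

def stokoe_style(location, handshape, movement, orientation=None):
    hand = f"{orientation}{handshape}" if orientation else handshape
    return location + hand + movement

# Placeholder symbols standing in for the glyphs described above:
print(stokoe_style(location="[]",    # trunk of the body
                   handshape="B",    # flat hand
                   movement="^"))    # upward movement  ->  "[]B^"
```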

Social Constructivism
Social constructivism is a sociological theory of knowledge that applies general philosophical constructivism to social settings, wherein groups construct knowledge for one another,
collaboratively creating a small culture of shared artifacts with shared meanings. When one is
immersed within a culture of this sort, one is learning all the time about how to be a part of that
culture on many levels. Its origins are largely attributed to Lev Vygotsky.
Social constructivism and social constructionism
Social constructivism is closely related to social constructionism in the sense that people are working together to construct artifacts. However, there is an important difference: social constructionism focuses on the artifacts that are created through the social interactions of a group, while social constructivism focuses on an individual's learning that takes place because of their interactions in a group.

A very simple example is an object like a cup. The object can be used for many things, but its shape
does suggest some 'knowledge' about carrying liquids. A more complex example is an online course: not only do the 'shapes' of the software tools indicate certain things about the way online courses
should work, but the activities and texts produced within the group as a whole will help shape how
each person behaves within that group.
For a philosophical account of one possible social constructionist ontology, see the 'Criticism' section
of Representative realism.
Social Constructivism and Education
Social constructivism has been studied by many educational psychologists, who are concerned with
its implications for teaching and learning. Constructivism forms one of the major theories of child development (alongside behaviourism, social learning theory, and social constructivism), arising from Jean Piaget's theory of cognitive development. Piaget's stage theory
(describing four successive stages of development) also became known as constructivism, because
he believed children needed to construct an understanding of the world for themselves. This
contrasts with behaviourism (learning theory) in which the development arises from specific forms
of learning, the child being seen as a passive recipient of environmental influences that shape its
behaviour. Piaget's theory saw children as possessing active agency rather than being passive
receptacles. Social constructivism extends constructivism by incorporating the role of other actors
and culture in development. In this sense it can also be contrasted with social learning theory by
stressing interaction over observation.
Vygotsky's contributions reside in Mind in Society and Thought and Language. Vygotsky
independently came to the same conclusions as Piaget regarding the constructive nature of
development.
For more on the psychological dimensions of social constructivism, see the work of A. Sullivan
Palincsar.
An instructional strategy grounded in social constructivism that is an area of active research is
computer-supported collaborative learning (CSCL). This strategy gives students opportunities to
practice 21st-century skills in communication, knowledge sharing, critical thinking and use of relevant
technologies found in the workplace.
Additionally, studies on increasing the use of student discussion in the classroom both support and
are grounded in theories of social constructivism. There are a full range of advantages that result
from the implementation of discussion in the classroom. Participation in group discussion allows
students to generalize and transfer their knowledge of classroom learning and builds a strong
foundation for communicating ideas orally (Reznitskaya, Anderson & Kuo, 2007). Many studies argue
that discussion plays a vital role in increasing student ability to test their ideas, synthesize the ideas
of others, and build deeper understanding of what they are learning (Corden, 2001; Nystrand, 1996;
Reznitskaya, Anderson & Kuo, 2007; Weber, Maher, Powell & Lee, 2008). Large and small group
discussion also affords students opportunities to exercise self-regulation, self-determination, and a
desire to persevere with tasks (Corden, 2001; Matsumara, Slater & Crosson, 2008). Additionally,
discussion increases student motivation, collaborative skills, and the ability to problem solve (Dyson,
2004; Matsumara, Slater & Crosson, 2008; Nystrand, 1996). Increasing students' opportunity to talk
with one another and discuss their ideas increases their ability to support their thinking, develop
reasoning skills, and to argue their opinions persuasively and respectfully (Reznitskaya, Anderson &
Kuo, 2007). Furthermore, the feeling of community and collaboration in classrooms increases
through offering more chances for students to talk together (Barab, Dodge, Thomas, Jackson, &
Tuzun, 2007; Hale & City, 2002; Weber, Maher, Powell & Lee, 2008). Given the advantages that result
from discussion, it is surprising that it is not used more often. Studies have found that students are not regularly accustomed to participating in academic discourse, and that teachers rarely choose classroom discussion as an instructional format. The results of Nystrand's (1996) three-year study
focusing on 2400 students in 60 different classrooms indicate that the typical classroom teacher
spends under three minutes an hour allowing students to talk about ideas with one another and the
teacher. Even within those three minutes of discussion, most talk is not true discussion because it
depends upon teacher directed questions with predetermined answers. Multiple observations
indicate that students in low socioeconomic schools and lower track classrooms are allowed even
fewer opportunities for discussion. Teachers who teach as if they value what their students think
create learners. Discussion and interactive discourse promote learning because they afford students
the opportunity to use language as a demonstration of their independent thoughts. Discussion elicits
sustained responses from students that encourage meaning making through negotiating with the
ideas of others. This type of learning promotes retention and in-depth processing associated with
the cognitive manipulation of information (Nystrand, 1996, p. 28).

Syllabus Design
The Place of the Syllabus
A language teaching syllabus involves the integration of subject matter (what to talk about) and
linguistic matter (how to talk about it); that is, the actual matter that makes up teaching. Choices of
syllabi can range from the more or less purely linguistic, where the content of instruction is the
grammatical and lexical forms of the language, to the purely semantic or informational, where the
content of instruction is some skill or information and only incidentally the form of the language. To
design a syllabus is to decide what gets taught and in what order. For this reason, the theory of
language explicitly or implicitly underlying the language teaching method will play a major role in
determining what syllabus is adopted. Theory of learning also plays an important part in determining
the kind of syllabus used. For example, a syllabus based on the theory of learning espoused by
cognitive code teaching would emphasize language forms and whatever explicit descriptive
knowledge about those forms was presently available. A syllabus based on an acquisition theory of
learning, however, would emphasize unanalyzed, though possibly carefully selected experiences of
the new language in an appropriate variety of discourse types.
The choice of a syllabus is a major decision in language teaching, and it should be made as
consciously and with as much information as possible. There has been much confusion over the years
as to what different types of content are possible in language teaching syllabi and as to whether the
differences are in syllabus or method. Several distinct types of language teaching syllabi exist, and
these different types may be implemented in various teaching situations.
Six Types of Syllabi
Although six different types of language teaching syllabi are treated here as though each occurred
"purely," in practice, these types rarely occur independently of each other. Almost all actual language
teaching syllabi are combinations of two or more of the types defined here. For a given course, one
type of syllabus usually dominates, while other types of content may be combined with it.
Furthermore, the six types of syllabi are not entirely distinct from each other. For example, the
distinction between skill-based and task-based syllabi may be minimal. In such cases, the
distinguishing factor is often the way in which the instructional content is used in the actual teaching
procedure. The characteristics, differences, strengths, and weaknesses of individual syllabi are
defined as follows:

1. "A structural (formal) syllabus." The content of language teaching is a collection of the
forms and structures, usually grammatical, of the language being taught. Examples include nouns,
verbs, adjectives, statements, questions, subordinate clauses, and so on.
2. "A notional/functional syllabus." The content of the language teaching is a collection of
the functions that are performed when language is used, or of the notions that language is used to
express. Examples of functions include: informing, agreeing, apologizing, requesting; examples of
notions include size, age, color, comparison, time, and so on.
3. "A situational syllabus." The content of language teaching is a collection of real or
imaginary situations in which language occurs or is used. A situation usually involves several
participants who are engaged in some activity in a specific setting. The language occurring in the
situation involves a number of functions, combined into a plausible segment of discourse. The
primary purpose of a situational language teaching syllabus is to teach the language that occurs in
the situations. Examples of situations include: seeing the dentist, complaining to the landlord, buying
a book at the book store, meeting a new student, and so on.

4. "A skill-based syllabus." The content of the language teaching is a collection of specific
abilities that may play a part in using language. Skills are things that people must be able to do to be
competent in a language, relatively independently of the situation or setting in which the language
use can occur. While situational syllabi group functions together into specific settings of language
use, skill-based syllabi group linguistic competencies (pronunciation, vocabulary, grammar, and
discourse) together into generalized types of behavior, such as listening to spoken language for the
main idea, writing well-formed paragraphs, giving effective oral presentations, and so on. The
primary purpose of skill-based instruction is to learn the specific language skill. A possible secondary
purpose is to develop more general competence in the language, learning only incidentally any
information that may be available while applying the language skills.

5. "A task-based syllabus." The content of the teaching is a series of complex and purposeful
tasks that the students want or need to perform with the language they are learning. The tasks are
defined as activities with a purpose other than language learning, but, as in a content-based syllabus,
the performance of the tasks is approached in a way that is intended to develop second language
ability. Language learning is subordinate to task performance, and language teaching occurs only as
the need arises during the performance of a given task. Tasks integrate language (and other) skills in
specific settings of language use. Task-based teaching differs from situation-based teaching in that
while situational teaching has the goal of teaching the specific language content that occurs in the
situation (a predefined product), task-based teaching has the goal of teaching students to draw on
resources to complete some piece of work (a process). The students draw on a variety of language
forms, functions, and skills, often in an individual and unpredictable way, in completing the tasks.
Tasks that can be used for language learning are, generally, tasks that the learners actually have to
perform in any case. Examples include: applying for a job, talking with a social worker, getting
housing information over the telephone, and so on.

6. "A content-based-syllabus." The primary purpose of instruction is to teach some content


or information using the language that the students are also learning. The students are
simultaneously language students and students of whatever content is being taught. The subject
matter is primary, and language learning occurs incidentally to the content learning. The content
teaching is not organized around the language teaching, but vice-versa. Content-based language
teaching is concerned with information, while task-based language teaching is concerned with
communicative and cognitive processes. An example of content-based language teaching is a
science class taught in the language the students need or want to learn, possibly with linguistic
adjustment to make the science more comprehensible.
In general, the six types of syllabi or instructional content are presented beginning with the one
based most on structure, and ending with the one based most on language use. Language is a
relationship between form and meaning, and most instruction emphasizes one or the other side of
this relationship.
Choosing and Integrating Syllabi
Although the six types of syllabus content are defined here in isolated contexts, it is rare for one type
of syllabus or content to be used exclusively in actual teaching settings. Syllabi or content types are
usually combined in more or less integrated ways, with one type as the organizing basis around
which the others are arranged and related. In discussing syllabus choice and design, it should be kept
in mind that the issue is not which type to choose but which types, and how to relate them to each
other.
Practical Guidelines to Syllabus Choice and Design
It is clear that no single type of content is appropriate for all teaching settings, and the needs and
conditions of each setting are so idiosyncratic that specific recommendations for combination are not
possible. In addition, the process of designing and implementing an actual syllabus warrants a
separate volume. Several books are available that address the process of syllabus design and
implementation both practically and theoretically (see For Further Reading section; the full-length
monograph includes a 13-item annotated bibliography of basic works on syllabus design and a 67-item reference list). These books can help language course designers make decisions for their own
programs. However, a set of guidelines for the process is provided below.
Ten steps in preparing a practical language teaching syllabus:


1. Determine, to the extent possible, what outcomes are desired for the students in the instructional
program. That is, as exactly and realistically as possible, define what the students should be able to
do as a result of the instruction.
2. Rank the syllabus types presented here as to their likelihood of leading to the outcomes desired.
Several rankings may be necessary if outcomes are complex.
3. Evaluate available resources in expertise (for teaching, needs analysis, materials choice and
production, etc.), in materials, and in training for teachers.
4. Rank the syllabi relative to available resources. That is, determine what syllabus types would be
the easiest to implement given available resources.
5. Compare the lists made under Nos. 2 and 4. Making as few adjustments to the earlier list as
possible, produce a new ranking based on the resources' constraints.
6. Repeat the process, taking into account the constraints contributed by teacher and student factors
described earlier.
7. Determine a final ranking, taking into account all the information produced by the earlier steps.
8. Designate one or two syllabus types as dominant and one or two as secondary.
9. Review the question of combination or integration of syllabus types and determine how
combinations will be achieved and in what proportion.
10. Translate decisions into actual teaching units.
In making practical decisions about syllabus design, one must take into consideration all the possible
factors that might affect the teachability of a particular syllabus. By starting with an examination of
each syllabus type, tailoring the choice and integration of the different types according to local
needs, one may find a principled and practical solution to the problem of appropriateness and
effectiveness in syllabus design.
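To make the reconciliation logic of steps 2 through 7 concrete, here is a minimal illustrative sketch in Python; the syllabus types come from the list above, but the numeric scores, the weighting scheme, and the function names are hypothetical placeholders, not part of the original guidelines.

```python
# Illustrative sketch of steps 2-7: rank syllabus types by desired
# outcomes, then reconcile that ranking with resource constraints.
# All scores below are hypothetical placeholders.

SYLLABUS_TYPES = [
    "structural", "notional/functional", "situational",
    "skill-based", "task-based", "content-based",
]

# Step 2: how likely each type is to produce the desired outcomes (0-1).
outcome_fit = {
    "structural": 0.3, "notional/functional": 0.5, "situational": 0.6,
    "skill-based": 0.7, "task-based": 0.8, "content-based": 0.6,
}

# Step 4: how easy each type is to implement with available resources (0-1).
resource_fit = {
    "structural": 0.9, "notional/functional": 0.7, "situational": 0.6,
    "skill-based": 0.5, "task-based": 0.4, "content-based": 0.3,
}

def final_ranking(outcome_weight=0.7):
    """Steps 5-7: combine the two rankings, weighting outcomes more
    heavily so the outcome-based list is adjusted as little as possible
    (a simple stand-in for 'making as few adjustments')."""
    score = lambda t: (outcome_weight * outcome_fit[t]
                       + (1 - outcome_weight) * resource_fit[t])
    return sorted(SYLLABUS_TYPES, key=score, reverse=True)

ranking = final_ranking()
dominant, secondary = ranking[:2], ranking[2:4]   # Step 8
print("Dominant:", dominant, "Secondary:", secondary)
```

In practice, of course, the scores would come from the needs analysis and resource evaluation of steps 1 through 4 rather than from fixed numbers.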

Synaptic pruning
In neuroscience, synaptic pruning, neuron pruning or neuro-structural re-assembly is a neurological
regulatory process that facilitates productive changes in neural structure by reducing the overall
number of overproduced or "weak" neurons and synapses, leaving more efficient synaptic
configurations. The term is often used to describe the maturation of behavior and cognitive
intelligence in children through the "weeding out" of weaker synapses.
Synaptic pruning provides a simple means of removing unnecessary neuronal structures
from the brain; as the human develops, the need to understand more complex structures becomes
much more pertinent, and the simpler associations formed in childhood are thought to be replaced by
more complex structures.
Although it is often associated with the regulation of cognitive development in childhood,
pruning is also thought to be a process of removing neurons which may have become damaged or
degraded in order to further improve the "networking" capacity of a particular area of the brain.[1]
Furthermore, it has been stipulated that the mechanism operates not only in development
and repair, but also as a means of continually maintaining more efficient brain function by
removing neurons according to their synaptic efficiency.
During maturation phases in humans
In humans, synaptic pruning has been observed by inferring differences in the
estimated numbers of glial cells and neurons between children and adults; these estimates differ greatly in
the area of the mediodorsal thalamus.
In a study conducted in 2007 by Oxford University, researchers compared 8 newborn human
brains with 8 adult brains, using estimates of cell numbers based upon size and stereological
fractionation. The comparison showed that, on average, adult neuron estimates were 41% lower than those of the
newborns.
In terms of glial cells, however, adults had far larger estimates than newborns: 36.3 million
on average in adult brains, compared to 10.6 million in the newborn samples.[3][4] During
development, the structure of the brain is thought to change through postnatal degeneration and
deafferentation, although these phenomena have not been observed in some studies. In
development, neurons lost through programmed cell death are unlikely to be re-used, but are
instead replaced by new neuronal or synaptic structures; this has been found to occur alongside
structural change in the sub-cortical gray matter.

Task-Based Language Learning


Task-based language learning (TBLL), also known as task-based language teaching (TBLT) or task-based instruction (TBI), focuses on the use of authentic language and on asking students to do
meaningful tasks using the target language. Such tasks can include visiting a doctor, conducting an
interview, or calling customer service for help. Assessment is primarily based on task outcome (in
other words the appropriate completion of tasks) rather than on accuracy of language forms. This
makes TBLL especially popular for developing target language fluency and student confidence.
TBLL was popularized by N. Prabhu while working in Bangalore, India. Prabhu noticed that his
students could learn language just as easily with a non-linguistic problem as when they were
concentrating on linguistic questions.
According to Jane Willis, TBLL consists of the pre-task, the task cycle, and the language focus.
In practice
The core of the lesson is, as the name suggests, the task. All parts of the language used are
deemphasized during the activity itself, in order to get students to focus on the task. Although there
may be several effective frameworks for creating a task-based learning lesson, here is a rather
comprehensive one suggested by Jane Willis. Note that each lesson may be broken into several
stages with some stages removed or others added as the instructor sees fit:
Pre-task
In the pre-task, the teacher will present what will be expected of the students in the task phase.
Additionally, the teacher may prime the students with key vocabulary or grammatical constructs,
although, in "pure" task-based learning lessons, these will be presented as suggestions and the
students would be encouraged to use what they are comfortable with in order to complete the task.
The instructor may also present a model of the task by either doing it themselves or by presenting
a picture, audio, or video demonstrating the task.
Task
During the task phase, the students perform the task, typically in small groups, although this
depends on the type of activity. Unless the teacher plays a particular role in the task, the
teacher's role is typically limited to that of an observer or counselor, which is why task-based
learning is considered a more student-centered methodology.
Planning
Having completed the task, the students prepare either a written or oral report to present to the
class. The instructor takes questions and otherwise simply monitors the students.
Report
The students then present this information to the rest of the class. Here the teacher may provide
written or oral feedback, as appropriate, and the students observing may do the same.
Analysis
Here the focus returns to the teacher who reviews what happened in the task, in regards to
language. It may include language forms that the students were using, problems that students had,
and perhaps forms that need to be covered more or were not used enough.
Practice
The practice stage may be used to cover material mentioned by the teacher in the analysis stage. It is
an opportunity for the teacher to emphasize key language.
Advantages
Task-based learning is advantageous to the student because it is more student-centered, allows for
more meaningful communication, and often provides for practical extra-linguistic skill building.
Although the teacher may present language in the pre-task, the students are ultimately free to use
what grammar constructs and vocabulary they want. This allows them to use all the language they
know and are learning, rather than just the 'target language' of the lesson. Furthermore, as the tasks
are likely to be familiar to the students (e.g. visiting the doctor), students are more likely to be
engaged, which may further motivate them in their language learning.
Disadvantages
There have been criticisms that task-based learning is not appropriate as the foundation of a class for
beginning students. Others claim that students are only exposed to certain forms of language while
other forms, such as the language of discussion or debate, are neglected. Teachers may want to keep
these concerns in mind when designing a task-based learning lesson plan.
Related approaches to language teaching
Dogme language teaching shares a philosophy with TBLL, although it differs in approach. Dogme is a
communicative approach to language teaching that encourages teaching without published
textbooks, focusing instead on conversational communication among the learners and the
teacher.

Theory of Mind
Theory of mind is the ability to attribute mental states (beliefs, intents, desires, pretending,
knowledge, etc.) to oneself and others and to understand that others have beliefs, desires and
intentions that are different from one's own. Though there are philosophical approaches to issues
raised in such discussions, theory of mind as such is distinct from the philosophy of mind.

Defining Theory of Mind


Theory of Mind is a theory insofar as the mind is not directly observable. The presumption that
others have a mind is termed a theory of mind because each human can only prove the existence of
his or her own mind through introspection, and no one has direct access to the mind of another. It is
typically assumed that others have minds by analogy with one's own, and based on the reciprocal
nature of social interaction, as observed in joint attention, the functional use of language, and
understanding of others' emotions and actions. Having a theory of mind allows one to attribute
thoughts, desires, and intentions to others, to predict or explain their actions, and to posit their
intentions. As originally defined, it enables one to understand that mental states can be the cause
of, and thus be used to explain and predict, others' behavior. Being able to attribute mental
states to others and understanding them as causes of behavior implies, in part, that one must be
able to conceive of the mind as a generator of representations. If a person does not have a
complete theory of mind it may be a sign of cognitive or developmental impairment.
Theory of mind appears to be an innate potential ability in humans, but one requiring social and
other experience over many years to bring to fruition. Different people may develop more, or less,
effective theories of mind. Empathy is a related concept, meaning experientially recognizing and
understanding the states of mind, including beliefs, desires and particularly emotions of others, often
characterized as the ability to "put oneself into another's shoes." Theorizing in the neo-Piagetian
theories of cognitive development maintains that theory of mind is a byproduct of a broader
hypercognitive ability of the human mind to register, monitor, and represent its own functioning.
Research on theory of mind in a number of different populations (human and animal, adults and
children, normally- and atypically-developing) has grown rapidly in the almost 30 years since
Premack and Woodruff's paper, "Does the chimpanzee have a theory of mind?", as have the theories
of theory of mind. The emerging field of social neuroscience has also begun to address this debate,
by imaging humans while performing tasks demanding the understanding of an intention, belief or
other mental state.
An alternative account of ToM is given within operant psychology and provides significant empirical
evidence for a functional account of both perspective taking and empathy. The most developed
operant approach is founded on research on derived relational responding and is subsumed within
what is called, "Relational Frame Theory." According to this view empathy and perspective taking
comprise a complex set of derived relational abilities based on learning to discriminate and verbally
respond to ever more complex relations between self, others, place, and time, and the
transformation of function through established relations.
Philosophical roots
Contemporary discussions of ToM have their roots in philosophical debate, most broadly from the
time of Descartes' "Second Meditation," which set the groundwork for considering the science of the
mind. Most prominent recently are two contrasting approaches, in the philosophical literature, to
theory of mind: theory-theory and simulation theory. The theory-theorist imagines a veritable
theory, "folk psychology," used to reason about others' minds. The theory is developed
automatically and innately, though instantiated through social interactions.
On the other hand, simulation theory suggests ToM is not, at its core, theoretical. Two kinds of
simulationism have been proposed. One version (Alvin Goldman's) emphasizes that one must
recognize one's own mental states before ascribing mental states to others by simulation. The
second version of simulation theory proposes that each person comes to know his or her own and
others' minds through what Robert Gordon names a logical "ascent routine" which answers
questions about mental states by re-phrasing the question as a metaphysical one. For example, if Zoe
asks Pam, "Do you think that dog wants to play with you?", Pam would ask herself, "Does that dog
want to play with me?" to determine her own response. She could equally well ask that to answer
the question of what Zoe might think. Both hold that people generally understand one another by
simulating being in the other's shoes.
One of the differences between the two theories that have influenced psychological consideration of
ToM is that theory-theory describes ToM as a detached theoretical process that is an innate feature,
whereas simulation theory portrays ToM as a kind of knowledge that allows one to form predictions
of someone's mental states by putting oneself in the other person's shoes and simulating them.
These theories continue to inform the definitions of theory of mind at the heart of scientific ToM
investigation.
The philosophical roots of the Relational Frame Theory account of ToM arise from contextual
psychology, which refers to the study of organisms (both human and non-human) interacting in and
with a historical and current situational context. It is an approach based on contextualism, a
philosophy in which any event is interpreted as an ongoing act inseparable from its current and
historical context and in which a radically functional approach to truth and meaning is adopted. As a
variant of contextualism, RFT focuses on the construction of practical, scientific knowledge. This
scientific form of contextual psychology is virtually synonymous with the philosophy of operant
psychology.
Theory of mind development
The study of which animals are capable of attributing knowledge and mental states to others, as well
as when in human ontogeny and phylogeny this ability developed, has identified a number of
precursory behaviors to a theory of mind. Understanding attention, understanding of others'
intentions and imitative experience with other people are hallmarks of a theory of mind which may
be observed early in the development of what will later become a full-fledged theory. In studies with
non-human animals and pre-verbal humans, in particular, researchers look to these behaviors
preferentially in making inferences about mind.
Baron-Cohen identified the infant's understanding of attention in others, a social skill found by 7 to 9
months of age, as a "critical precursor" to the development of theory of mind. Understanding
attention involves understanding that seeing can be directed selectively as attention, that the looker
assesses the seen object as "of interest," and that seeing can induce beliefs. Attention can be
directed and shared by the act of pointing, a joint attention behavior which requires taking into
account another person's mental state, particularly whether the person notices an object or finds it
of interest. Baron-Cohen speculates that the inclination to spontaneously reference an object in the
world as of interest ("proto-declarative pointing") and to likewise appreciate the directed attention
and interests of another may be the underlying motive behind all human communication.
Understanding of others' intentions is another critical precursor to understanding other minds
because intentionality, or "aboutness", is a fundamental feature of mental states and events. The
"intentional stance" has been defined by Dennett as an understanding that others' actions are goaldirected and arise from particular beliefs or desires. Both 2- and 3-year-old children could
discriminate when an experimenter intentionally vs. accidentally marked a box as baited with
stickers. Even earlier in ontogeny, Meltzoff found that 18 month-old infants could perform target
manipulations that adult experimenters attempted and failed, suggesting the infants could represent
the object-manipulating behavior of adults as involving goals and intentions. While attribution of
intention (the box-marking) and knowledge (false-belief tasks) is investigated in young humans and
nonhuman animals to detect precursors to a theory of mind, Gagliardi et al. have pointed out that
even adult humans do not always act in a way consistent with an attributional perspective. In the
experiment, adult human subjects came to make choices about baited containers when guided by
confederates who could not see (and therefore, not know) which container had been baited.
Recent research in developmental psychology suggests that the infant's ability to imitate others lies
at the origins of both a theory of mind and other social-cognitive achievements like perspective-taking and empathy. According to Meltzoff, the infant's innate understanding that others are "like
me" allows it to recognize the equivalence between the physical and mental states apparent in
others and those felt by the self. For example, the infant uses his own experiences orienting his
head/eyes toward an object of interest to understand the movements of others who turn toward an
object, that is, that they will generally attend to objects of interest or significance. Some researchers
in comparative disciplines have hesitated to place too much weight on imitation as a critical
precursor to advanced human social-cognitive skills like mentalizing and empathizing, especially if
true imitation is no longer employed by adults. A test of imitation by Horowitz found that adult
subjects imitated an experimenter demonstrating a novel task far less closely than child subjects
did. Horowitz points out that the precise psychological state underlying imitation is unclear and
cannot, by itself, be used to draw conclusions about the mental states of humans.
Empirical investigation
Whether children younger than 3 or 4 years old may have a theory of mind is a topic of debate
among researchers. It is a challenging question, due to the difficulty of assessing what pre-linguistic
children understand about others and the world. Tasks used in research into the development of
ToM must take into account the umwelt (the German word Umwelt means "environment" or
"surrounding world") of the pre-verbal child.
False-belief task
One of the most important milestones in theory of mind development is gaining the ability to
attribute false belief: that is, to recognize that others can have beliefs about the world that are
wrong. To do this, it is suggested, one must understand how knowledge is formed, that people's
beliefs are based on their knowledge, that mental states can differ from reality, and that people's
behavior can be predicted by their mental states. Numerous versions of the false-belief task have
been developed, based on the initial task done by Wimmer and Perner (1983).
In the most common version of the false-belief task (often called the Sally-Anne task), children are
told or shown a story involving two characters. For example, the child is shown two dolls, Sally and
Anne, who have a basket and a box, respectively. Sally also has a marble, which she places in her
basket, and then leaves to take a walk. While she is out of the room, Anne takes the marble from the
basket, eventually putting it in the box. Sally returns, and the child is then asked where Sally will look
for the marble. The child passes the task if she answers that Sally will look in the basket, where she
put the marble; the child fails the task if she answers that Sally will look in the box, where the child
knows the marble is hidden, even though Sally cannot know, since she did not see it hidden there. In
order to pass the task, the child must be able to understand that another's mental representation of
the situation is different from their own, and the child must be able to predict behavior based on that
understanding. The results of research using false-belief tasks have been fairly consistent: most
normally-developing children are unable to pass the tasks until around age four. (Notably, while most
children, including those with Down's syndrome, are able to pass this "test", in one study, 80% of
children diagnosed with autism were unable to do so.)
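As a purely illustrative aside (not part of the original research), the logic of the false-belief task can be sketched as a toy program in which an agent's belief is updated only by events she witnesses, so that belief and reality can diverge; all class and variable names here are hypothetical.

```python
# Toy sketch of the Sally-Anne false-belief task: an agent's belief
# tracks only the events she witnesses, so it can diverge from reality.

class Agent:
    def __init__(self, name):
        self.name = name
        self.belief = {}          # agent's belief: object -> location

    def observe(self, obj, location):
        self.belief[obj] = location

world = {}                        # true state: object -> location
sally, anne = Agent("Sally"), Agent("Anne")

# Sally puts the marble in the basket; both dolls see this.
world["marble"] = "basket"
sally.observe("marble", "basket")
anne.observe("marble", "basket")

# Sally leaves; Anne moves the marble. Only Anne witnesses the move.
world["marble"] = "box"
anne.observe("marble", "box")

# The false-belief question: where will Sally look for the marble?
# A child "passes" by reporting Sally's belief, not the world state.
print("Sally will look in the:", sally.belief["marble"])   # basket
print("The marble actually is in the:", world["marble"])   # box
```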
Appearance-reality task
Other tasks have been developed to try to solve the problems inherent in the false-belief task. In the
"appearance-reality", or "Smarties" task, experimenters ask children what they believe to be the
contents of a box that looks as though it holds a candy called "Smarties." After the child guesses
(usually) "Smarties," each is shown that the box in fact contained pencils. The experimenter then recloses the box and asks the child what she thinks another person, who has not been shown the true
contents of the box, will think is inside. The child passes the task if she responds that another person
will think that there are "Smarties" in the box, but fails the task if she responds that another person
will think that the box contains pencils. Gopnik & Astington (1988) found that children pass this test
at age four or five years.
Other tasks
The "false-photograph" task
is another task that serves as a measure of theory of mind
development. In this task, children must reason about what is represented in a photograph that
differs from the current state of affairs. Within the false-photograph task, there is either a location or
identity change. In the location-change task, the child is told a story about a character that puts an
object in one location (e.g., chocolate in a green cupboard) and takes a Polaroid photograph of the
scene. While the photograph is developing, the object is moved to a different location (e.g., to a blue
cupboard). The child is then asked two control questions: "When we first took the picture, where was
the object?" and "Where is the object now?" The subject is also asked a false-photograph question: "Where
is the object in the picture?" The child passes the task if she correctly identifies the location of the
object in the picture and the actual location of the object at the time of the question.
In order to make tasks more accessible for young children, non-human animals, and autistic
individuals, theory of mind research has begun employing non-verbal paradigms. One category of
tasks uses a preferential looking paradigm, with looking time as the dependent variable. For instance,
Woodward found that 9-month-old infants preferred looking at behaviors performed by a human
hand over those made by an inanimate hand-like object. Other paradigms look at rates of imitative
behavior, the ability to replicate and complete unfinished goal-directed acts, and observations of
rates of pretend play.
Autism
The theory of mind (ToM) impairment describes a difficulty someone would have with perspective
taking. This is also sometimes referred to as "mind-blindness." This means that individuals with a
ToM impairment would have a hard time seeing things from any other perspective than their own.
Individuals who experience a theory of mind deficit have difficulty determining the intentions of
others, lack understanding of how their behavior affects others, and have a difficult time with
social reciprocity. In 1985 Simon Baron-Cohen, Alan M. Leslie and Uta Frith published research
which suggested that children with autism do not employ a theory of mind, and suggested that
children with autism have particular difficulties with tasks requiring the child to understand another
person's beliefs. These difficulties persist when children are matched for verbal skills (Happe, 1995,
Child Development) and have been taken as a key feature of autism.
Many individuals classified as having autism have severe difficulty assigning mental states to
others, and they seem to lack theory of mind capabilities. Researchers who study the relationship
between autism and theory of mind attempt to explain the connection in a variety of ways. One
account assumes that theory of mind plays a role in the attribution of mental states to others and
in childhood pretend play. According to Leslie, theory of mind is the capacity to mentally
represent thoughts, beliefs, and desires, regardless of whether or not the circumstances involved
are real. This might explain why individuals with autism show extreme deficits in both theory of mind
and pretend play. However, Hobson proposes a social-affective justification, which suggests that in a
person with autism, deficits in theory of mind result from a distortion in understanding and
responding to emotions. He suggests that typically developing human beings, unlike individuals with
autism, are born with a set of skills (such as social referencing ability) which will later enable them to
comprehend and react to other people's feelings. Other scholars emphasize that autism involves a
specific developmental delay, so that children with the impairment vary in their deficiencies, because
they experience difficulty in different stages of growth. Very early setbacks can alter proper
advancement of joint-attention behaviors, which may lead to a failure to form a full theory of mind.
It has been speculated that ToM exists on a continuum as opposed to the traditional view of a
concrete presence or absence. While some research has suggested that some autistic populations
are unable to attribute mental states to others, recent evidence points to the possibility of coping
mechanisms that facilitate a spectrum of mindful behavior. In addition to autism, ToM deficits have
also been observed in schizophrenics.
Brain mechanisms
In normally developing humans
Research on theory of mind in autism led to the view that mentalizing abilities are subserved by
dedicated mechanisms that can (in some cases) be impaired while general cognitive function remains
largely intact. Neuroimaging research has supported this view, demonstrating specific brain regions
consistently engaged during theory of mind tasks. Early PET research on theory of mind, using verbal
and pictorial story comprehension tasks, identified a set of regions including the medial prefrontal
cortex (mPFC), an area around the posterior superior temporal sulcus (pSTS), and sometimes the precuneus
and amygdala/temporopolar cortex (reviewed in ). Subsequently, research on the neural basis of
theory of mind has diversified, with separate lines of research focused on the understanding of
beliefs, intentions, and more complex properties of minds such as psychological traits.
Studies from Rebecca Saxe's lab at MIT, using a false belief versus false photograph task contrast
aimed to isolate the mentalizing component of the false belief task, have very consistently found
activation in mPFC, precuneus, and temporo-parietal junction (TPJ), right-lateralized. In particular, it
has been proposed that the right TPJ (rTPJ) is selectively involved in representing the beliefs of
others.[44] However, this hypothesis remains controversial, because the same rTPJ region has been
consistently activated during spatial reorienting of visual attention[45][46]; Jean Decety from the
University of Chicago and Jason Mitchell from Harvard have thus proposed that the rTPJ subserves a
more general function involved in both false belief understanding and attentional reorienting, rather
than a mechanism specialized for social cognition.
Functional imaging has also been used to study the detection of mental state information in Heider-Simmel-esque animations of moving geometric shapes, which typical humans automatically perceive
as social interactions laden with intention and emotion. Three studies found remarkably similar
patterns of activation during the perception of such animations versus a random or deterministic
motion control: mPFC, pSTS, fusiform face area (FFA), and amygdala were selectively engaged during
the ToM condition. Another study presented subjects with an animation of two dots moving with a
parameterized degree of intentionality (quantifying the extent to which the dots chased each other),
and found that pSTS activation correlated with this parameter.
A separate body of research has implicated the posterior superior temporal sulcus in the perception
of intentionality in human action; this area is also involved in perceiving biological motion, including
body, eye, mouth, and point-light display motion (reviewed in). One study found increased pSTS
activation while watching a human lift his hand versus having his hand pushed up by a piston
(intentional versus unintentional action). Several studies have found increased pSTS activation when
subjects perceive a human action that is incongruent with the action expected from the actor's
context and inferred intention: for instance, a human performing a reach-to-grasp motion on empty
space next to an object, versus grasping the object[53]; a human shifting eye gaze toward empty
space next to a checkerboard target versus shifting gaze toward the target[54]; a human turning on a
light with his knee, versus turning on a light with his knee while carrying a pile of books; and a
walking human pausing as he passes behind a bookshelf, versus walking at a constant speed.[56] In
these studies, actions in the "congruent" case have a straightforward goal, and are easy to explain in
terms of the actor's intention; the incongruent actions, on the other hand, require further
explanation (why would someone twist empty space next to a gear?), and apparently demand more
processing in the STS. Note that this region is distinct from the temporo-parietal area activated
during false belief tasks. Also note that pSTS activation in most of the above studies was largely right-lateralized, following the general trend in neuroimaging studies of social cognition and perception:
also right-lateralized are the TPJ activation during false belief tasks, the STS response to biological
motion, and the FFA response to faces.
Neuropsychological evidence has provided support for neuroimaging results on the neural basis of
theory of mind. A study with patients suffering from a lesion of the temporoparietal junction of the
brain (between the temporal lobe and parietal lobe) reported that they have difficulty with some
theory of mind tasks. This shows that theory of mind abilities are associated with specific parts of the
human brain. However, the fact that the medial prefrontal cortex and temporoparietal junction are
necessary for theory of mind tasks does not imply that these regions are specific to that function. TPJ
and mPFC may subserve more general functions necessary for ToM.
Research by Vittorio Gallese, Luciano Fadiga and Giacomo Rizzolatti (reviewed in) has shown that
some sensorimotor neurons, which are referred to as mirror neurons, first discovered in the
premotor cortex of rhesus monkeys, may be involved in action understanding. Single-electrode
recording revealed that these neurons fired when a monkey performed an action and when the
monkey viewed another agent carrying out the same task. Similarly, fMRI studies with human
participants have shown brain regions (assumed to contain mirror neurons) are active when one
person sees another person's goal-directed action. These data have led some authors to suggest that
mirror neurons may provide the basis for theory of mind in the brain, and to support simulation
theory of mind reading (see above).
However, there is also evidence against the link between mirror neurons and theory of mind. First,
macaque monkeys have mirror neurons but do not seem to have a 'human-like' capacity to
understand theory of mind and belief. Second, fMRI studies of theory of mind typically report
activation in the mPFC, temporal poles and TPJ or STS, but these brain areas are not part of the
mirror neuron system. Some investigators, like developmental psychologist Andrew Meltzoff and
neuroscientist Jean Decety, believe that mirror neurons merely facilitate learning through imitation
and may provide a precursor to the development of ToM.[63][64]
In autism
Several neuroimaging studies have looked at the neural basis of theory of mind impairment in subjects
with Asperger syndrome and high-functioning autism (HFA). The first PET study of theory of mind in
autism (also the first neuroimaging study using a task-induced activation paradigm in autism)
employed a story comprehension task,[65] replicating a prior study in normal individuals.[66] This
study found displaced and diminished mPFC activation in subjects with autism. However, because
the study used only six subjects with autism, and because the spatial resolution of PET imaging is
relatively poor, these results should be considered preliminary.
A subsequent fMRI study scanned normally developing adults and adults with HFA while performing
a "reading the mind in the eyes" taskviewing a photo of a humans eyes and choosing which of two
adjectives better describes the persons mental state, versus a gender discrimination control.[67] The
authors found activity in orbitofrontal cortex, STS, and amygdala in normal subjects, and found no
amygdala activation and abnormal STS activation in subjects with autism.
A more recent PET study looked at brain activity in individuals with HFA and Asperger syndrome while
viewing Heider-Simmel animations (see above) versus a random motion control.[68] In contrast to
normally developing subjects, those with autism showed no STS or FFA activation, and significantly
less mPFC and amygdala activation. Activity in extrastriate regions V3 and LO was identical across the
two groups, suggesting intact lower-level visual processing in the subjects with autism. The study also
reported significantly reduced functional connectivity between STS and V3 in the autism group.
Note, however, that decreased temporal correlation between activity in STS and V3 would be
expected simply from the lack of an evoked response in STS to intent-laden animations in subjects
with autism; a more informative analysis would be to compute functional connectivity after
regressing out evoked responses from all time series.
A subsequent study, using the incongruent/congruent gaze shift paradigm described above, found
that in high-functioning adults with autism, STS activation was undifferentiated while watching a
human shift gaze toward a target and toward adjacent empty space.[69] The lack of additional STS
processing in the incongruent state may suggest that these subjects fail to form an expectation of
what the actor should do given contextual information, or that information about the violation of
this expectation does not reach STS; both explanations involve an impairment in the ability to link eye
gaze shifts with intentional explanations. This study also found a significant anticorrelation between
STS activation in the incongruent-congruent contrast and social subscale score on the Autism
Diagnostic Interview-Revised, but not scores on the other subscales.
Non-human theory of mind
As the title of Premack and Woodruff's 1978 article "Does the chimpanzee have a theory of mind?"
indicates, it is also important to ask if other animals besides humans have a genetic endowment and
social environment that allows them to acquire a theory of mind in the same way that human
children do. This is a contentious issue because of the problem of inferring from animal behavior the
existence of thinking, of the existence of a concept of self or self-awareness, or of particular
thoughts. One difficulty with non-human studies of ToM is the lack of sufficient numbers of
naturalistic observation, giving insight into what the evolutionary pressures might be on a species'
development of theory of mind.
Non-human research still has a major place in this field, however, and is especially useful in
illuminating which nonverbal behaviors signify components of theory of mind, and in pointing to
possible stepping points in the evolution of what many claim to be a uniquely human aspect of social
cognition. While it is difficult to study human-like theory of mind and mental states in species which
we do not yet describe as "minded" at all, and about whose potential mental states we have an
incomplete understanding, researchers can focus on simpler components of more complex
capabilities. For example, many researchers focus on animals' understanding of intention, gaze,
perspective, or knowledge (or rather, what another being has seen). Call and Tomasello's study that
looked at understanding of intention in orangutans, chimpanzees and children showed that all three
species understood the difference between accidental and intentional acts. Part of the difficulty in
this line of research is that observed phenomena can often be explained as simple stimulus-response
learning, as it is in the nature of any theorizers of mind to have to extrapolate internal mental states
from observable behavior. Recently, most non-human theory of mind research has focused on
monkeys and great apes, who are of most interest in the study of the evolution of human social
cognition. Other studies relevant to attributions of theory of mind have been conducted using plovers
and dogs, and have shown preliminary evidence of understanding attention (one precursor of
theory of mind) in others.
There has been some controversy over the interpretation of evidence purporting to show theory of
mind abilityor inabilityin animals. Two examples serve as demonstration: first, Povinelli et al.
presented chimpanzees with the choice of two experimenters from which to request food: one who
had seen where food was hidden, and one who, by virtue of one of a variety of mechanisms (having a
bucket or bag over his head; a blindfold over his eyes; or being turned away from the baiting) does
not know, and can only guess. They found that the animals failed in most cases to differentially
request food from the "knower." By contrast, Hare, Call, and Tomasello found that subordinate
chimpanzees were able to use the knowledge state of dominant rival chimpanzees to determine
which container of hidden food they approached.

Universal grammar
Universal grammar (UG) is a theory of linguistics postulating principles of grammar shared by all
languages, thought to be innate to humans (linguistic nativism). It attempts to explain language
acquisition in general, not describe specific languages. Universal grammar proposes a set of rules
intended to explain language acquisition in child development.
Some students of universal grammar study a variety of grammars to abstract generalizations called
linguistic universals, often in the form of "If X holds true, then Y occurs." These have been extended
to a range of traits, from the phonemes found in languages, to what word orders languages choose,
to why children exhibit certain linguistic behaviors.
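As an illustration of how such implicational universals can be examined (a hedged sketch, not from the source), a claimed universal of the form "if X holds true, then Y occurs" can be checked mechanically against a database of language descriptions. The feature names below follow a classic Greenberg-style example (dominant VSO order implying prepositions), but the three toy languages and all code names are invented for the example.

```python
# Toy sketch: testing an implicational universal ("if X, then Y") against
# a small, hypothetical database of language feature descriptions.

languages = {
    "LangA": {"has_VSO_order": True,  "has_prepositions": True},
    "LangB": {"has_VSO_order": False, "has_prepositions": False},
    "LangC": {"has_VSO_order": True,  "has_prepositions": True},
}

def violations(feature_x, feature_y, data):
    """Return the languages that violate 'if X then Y'."""
    return [name for name, feats in data.items()
            if feats[feature_x] and not feats[feature_y]]

counterexamples = violations("has_VSO_order", "has_prepositions", languages)
print("Counterexamples:", counterexamples or "none found in this sample")
```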
The idea has deep historical roots, traceable to Roger Bacon and the 13th-century speculative
grammarians (see the History section below). Later linguists who have influenced this theory
include Noam Chomsky, Edward Sapir and Richard Montague, who developed their versions of the theory
as they considered issues of the argument from the poverty of the stimulus arising from the
constructivist approach to linguistic theory. The application of the idea to the area of second
language acquisition (SLA) is represented mainly by the McGill linguist Lydia White.
Universal Grammar, as hypothesized by Chomsky, has long been controversial due to its strong
innatist assumptions and lack of empirical basis. Most syntacticians concede that there are
parametric points of variation between languages, although heated debate occurs over whether UG
constraints are essentially universal due to being "hard-wired" (Chomsky's Principles and Parameters
approach), a logical consequence of a specific syntactic architecture (the Generalized Phrase
Structure approach) or the result of functional constraints on communication (the functionalist
approach).
History
The idea can be traced to Roger Bacon's observation that all languages are built upon a common
grammar, substantially the same in all languages, even though it may undergo accidental variations,
and the 13th century speculative grammarians who, following Bacon, postulated universal rules
underlying all grammars. The concept of a universal grammar or language was at the core of the 17th
century projects for philosophical languages. There is a Scottish school of universal grammarians
from the 18th century, to be distinguished from the philosophical language project, and including
authors such as James Beattie, Hugh Blair, James Burnett, James Harris, and Adam Smith. The article
on "Grammar" in the first edition of the Encyclopedia Britannica (1771) contains an extensive section
titled "Of Universal Grammar."
The idea rose to notability in modern linguistics with theorists such as Noam Chomsky and Richard
Montague, as it was developed in the 1950s to 1970s as part of the "Linguistics Wars".
Chomsky's theory
Further information: Language acquisition device, Generative grammar, X-bar theory, Government
and Binding, Principles and parameters, and Minimalist Program
Linguist Noam Chomsky made the argument that the human brain contains a limited set of rules for
organizing language. In turn, there is an assumption that all languages have a common structural
basis. This set of rules is known as universal grammar.
Speakers proficient in a language know what expressions are acceptable in their language and what
expressions are unacceptable. The key puzzle is how speakers come to know the restrictions
of their language, since expressions which violate those restrictions are not present in the input
and are not indicated as such. This absence of negative evidence (that is, absence of evidence that an expression
is part of a class of the ungrammatical sentences in one's language) is the core of the poverty of
stimulus argument. For example, in English one cannot relate a question word like 'what' to a
predicate within a relative clause (1):
(1) What did John meet a man who sold?
Such expressions are not available to the language learners, because they are, by hypothesis,
ungrammatical for speakers of the local language. Speakers of the local language do not utter such
expressions, nor do they point out to language learners that such expressions are unacceptable. Universal grammar offers a
solution to the poverty of the stimulus problem by making certain restrictions universal
characteristics of human languages. Language learners are consequently never tempted to generalize
in an illicit fashion.
Evidence and support
Neurological evidence
Recent (2003) evidence suggests that part of the human brain (crucially involving Broca's area, a portion
of the left inferior frontal gyrus), is selectively activated by those languages that meet Universal
Grammar requirements.
How Would Universal Grammar Become Hardwired in the Brain?
Vandervert (2009a) proposed that language evolved from highly repetitive visual-spatial processing
in working memory in direct collaboration with the cerebellum. The cerebellum, in conjunction with
the cerebral cortex, functions to make all motor and working memory activities more efficient.
Elaborating on the work of anthropologist Stanley Ambrose (2001), Vandervert described how the
cerebellum, which has seen a fourfold increase in size during the last million years, acted to compose
and decompose (Flanagan et al., 1999) visual-spatial working memory into protocols of linguistic
mental events through co-evolution with progressively complex stone tool making. Through
hundreds of thousands of years of this co-evolution these protocols became hardwired in the brain,
thereby adding the speech loop to working memory (including Broca's area) and establishing a
universal grammar. According to Vandervert, linguistic protocols were selected because they
adaptively refined the perceptual-motor control in the evolution of stone tool technology and the
social sharing of complex tasks; thus the tremendous fourfold growth of the cerebellum was co-selected with new functions of working memory over the last million years.
Expanding these ideas to the ontogenetic level, Vandervert (2009a, 2009b) proposed that
during language acquisition (see section 7.2.1, Evolution of language in working memory) in infancy
the infant starts with visual-spatial working memory and naturally develops the speech loop of
working memory through what Mandler (2004) described as perceptual analysis. The resulting
conceptual primitives simultaneously provide the basis for a universal grammar.
Presence of creole languages
The presence of creole languages is cited as further support for this theory, especially by Bickerton's
controversial language bioprogram theory. These languages were developed and formed when
different societies came together and were forced to devise their own system of communication. The
system used by the original speakers was an inconsistent mix of vocabulary items known as a pidgin.
When these speakers' children were acquiring their first language, they used the pidgin input to
effectively create their own original language, known as a creole. Unlike pidgins, creoles have native
speakers and make use of a full grammar.
The idea of universal grammar is supported by the creole languages by virtue of the fact that certain
features are shared by virtually all of these languages. For example, their default point of reference in
time (expressed by bare verb stems) is not the present moment, but the past. Using pre-verbal
auxiliaries, they uniformly express tense, aspect, and mood. Negative concord occurs, but it affects
the verbal subject (as opposed to the object, as it does in languages like Spanish). Another similarity
among creoles is that questions are created simply by changing a declarative sentence's intonation,
not its word order or content.
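
To make the shared pattern concrete, here is a minimal sketch in Python (not drawn from any actual creole; the particle forms and the fixed tense-mood-aspect ordering are illustrative assumptions):

    # Illustrative sketch of the creole pattern described above: tense,
    # mood, and aspect are expressed by particles placed before the bare
    # verb stem, and a bare stem defaults to past reference. The particle
    # forms below are invented placeholders, not data from a real creole.
    PARTICLES = {
        ("tense", "anterior"): "bin",     # hypothetical past/anterior marker
        ("mood", "irrealis"): "go",       # hypothetical future/conditional marker
        ("aspect", "nonpunctual"): "de",  # hypothetical progressive marker
    }

    def verb_phrase(stem, tense=None, mood=None, aspect=None):
        """Build a creole-like verb phrase from pre-verbal particles."""
        slots = [("tense", tense), ("mood", mood), ("aspect", aspect)]
        markers = [PARTICLES[(name, value)] for name, value in slots if value]
        return " ".join(markers + [stem])

    print(verb_phrase("walk"))                  # "walk" (past reading)
    print(verb_phrase("walk", tense="anterior",
                      aspect="nonpunctual"))    # "bin de walk"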
Criticism
Since their inception, universal grammar theories have been subjected to vocal and sustained
criticism. In recent years, with the advent of more sophisticated brands of computational modeling
and more innovative approaches to the study of language acquisition, these criticisms have
multiplied.
Geoffrey Sampson maintains that universal grammar theories are not falsifiable and are therefore
pseudoscientific. He argues that the grammatical "rules" linguists posit are simply post-hoc
observations about existing languages, rather than predictions about what is possible in a language, a
claim that has been echoed by Henry L. Roediger III in "What Happened to Behaviorism?" Similarly,
Jeffrey Elman argues that the unlearnability of languages assumed by Universal Grammar is based
on an overly strict, "worst-case" model of grammar that does not correspond to any actual grammar. In
keeping with these points, James Hurford argues that the postulate of a language acquisition device
(LAD) essentially amounts to the trivial claim that languages are learnt by humans, and thus, that the
LAD is less a theory than an explanandum looking for theories.
Sampson, Roediger, Elman and Hurford are hardly alone in suggesting that several of the basic
assumptions of Universal Grammar are unfounded. Indeed, a growing number of language
acquisition researchers argue that the very idea of a strict rule-based grammar in any language flies
in the face of what is known about how languages are spoken and how languages evolve over time.
For instance, Morten Christiansen and Nick Chater have argued that the relatively fast-changing
nature of language would prevent the slower-changing genetic structures from ever catching up,
undermining the possibility of a genetically hard-wired universal grammar. In addition, it has been
suggested that people learn about probabilistic patterns of word distributions in their language,
rather than hard and fast rules (see the distributional hypothesis). It has also been proposed that the
poverty of the stimulus problem can be largely avoided if we assume that children employ similarity-based generalization strategies in language learning, generalizing about the usage of new words from
similar words that they already know how to use.
Another way of defusing the poverty of the stimulus argument is to assume that if language learners
notice the absence of classes of expressions in the input, they will hypothesize a restriction (a
solution closely related to Bayesian reasoning). In a similar vein, Michael Ramscar has suggested that
when children erroneously expect an ungrammatical form that then never occurs, the repeated
failure of expectation serves as a form of implicit negative feedback that allows them to correct their
errors over time. This implies that word learning is a probabilistic, error-driven process, rather than a
process of fast mapping, as many nativists assume.
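
To make the error-driven account concrete, here is a minimal sketch (the update rule and numbers are illustrative assumptions, not Ramscar's actual model): a child who expects the overregularized form "goed" keeps hearing "went" instead, and each failed expectation shifts the balance.

    # Illustrative error-driven update: every observed "went" is a failed
    # expectation of "goed", strengthening the correct form and weakening
    # the overregularization without any explicit correction from adults.
    expectations = {"goed": 0.8, "went": 0.2}
    LEARNING_RATE = 0.2  # assumed value, chosen only for illustration

    for exposure in range(1, 21):
        # The input contains "went" and never contains "goed".
        prediction_error = 1.0 - expectations["went"]
        expectations["went"] += LEARNING_RATE * prediction_error
        expectations["goed"] -= LEARNING_RATE * expectations["goed"]
        if exposure % 5 == 0:
            print(exposure, {k: round(v, 3) for k, v in expectations.items()})

On these assumptions the ungrammatical form decays purely from the statistics of the input, which is the sense in which the feedback is implicit.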
Finally, in the domain of field research, the Pirahã language is claimed to be a counterexample to the
basic tenets of Universal Grammar. Among other things, this language is alleged to lack all evidence
for recursion, including embedded clauses, as well as quantifiers and color terms. Some other
linguists have argued, however, that some of these properties have been misanalyzed, and that
others are actually expected under current theories of Universal Grammar. While most languages
studied in that respect do indeed seem to share common underlying rules, research is hampered by
considerable sampling bias. The linguistically most diverse areas, such as tropical Africa and the
Americas, as well as Indigenous Australian and Papuan languages, have been insufficiently studied.
Furthermore, language extinction has apparently hit hardest in precisely those areas where most
examples of unconventional languages have been found to date.


Vygotsky's influence on Krashen's second language acquisition theory


Although Vygotsky and Krashen come from entirely different backgrounds, the application of their
theories to second language teaching produces similarities.
Whether by influence or coincidence, Krashen's input hypothesis resembles Vygotsky's concept of the zone of
proximal development. According to the input hypothesis, language acquisition takes place during
human interaction in an environment of the foreign language when the learner receives language
'input' that is one step beyond his/her current stage of linguistic competence. For example, if a
learner is at a stage 'i', then maximum acquisition takes place when he/she is exposed to
'Comprehensible Input' that belongs to level 'i + 1'.
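
As a minimal sketch of the 'i + 1' selection principle (the materials, level scale, and function here are invented for illustration, not part of Krashen's formulation):

    # Illustrative sketch: given a learner at level i, pick materials
    # pitched exactly one level higher. Levels are made-up integers.
    def comprehensible_input(materials, learner_level):
        return [m for m in materials if m["level"] == learner_level + 1]

    materials = [{"title": "Graded reader A", "level": 2},
                 {"title": "Graded reader B", "level": 3}]
    print(comprehensible_input(materials, learner_level=2))  # reader B only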
Krashen's acquisition-learning hypothesis also seems to have been influenced by Vygotsky. Although
Vygotsky speaks of internalization of language while Krashen uses the term language acquisition,
both are based on a common assumption: interaction with other people. The concept of acquisition
as defined by Krashen, and its importance in achieving proficiency in foreign languages, can be read
as a direct application of Vygotsky's view of cognitive development as taking place in the matrix of the
person's social history and being a result of it.
Even the distinct concepts in Krashen's acquisition theory and Vygotsky's sociocultural theory are not
conflicting but complementary in providing resources for language teaching methodology.
By explaining human language development and cognitive development, Vygotsky's social-interactionist theory serves as a strong foundation for modern trends in applied linguistics. It
lends support to less structured and more natural, communicative and experiential approaches and
points to the importance of early real-world human interaction in foreign language learning.

Words and Rules


Words and Rules: The Ingredients of Language (ISBN 0-06-095840-5) is a 1999 popular linguistics
book by Steven Pinker on the subject of regular and irregular verbs. In Pinker's words, the book "tries
to illuminate the nature of language and mind by choosing a single phenomenon and examining it
from every angle imaginable." His analysis reflects his view that language and many other aspects
of human nature are innate evolutionary-psychological adaptations. Most of the book examines
studies of the form and frequency of grammatical errors in English (and to a lesser extent in German)
as well as the speech of brain-damaged persons with selective aphasia.
The title, Words and Rules, refers to a model Pinker believes best represents how words are
represented in the mind. He writes that words are either stored directly with their associated
meanings, in a "mental dictionary", or constructed using morphological rules. Leak and rose, for
example, would be stored as mental dictionary entries, but the words leaking and roses do not
need to be memorized separately, as they can be easily constructed by adding the appropriate
suffixes. By analyzing the English errors children make (such as overgeneralizing morphological
rules to create words like mouses and bringed), he concludes that irregular verbs are not
remembered in terms of the rules that produce them (such as the rule that produces sleep/slept,
weep/wept, keep/kept, etc.), and instead have their past tenses memorized directly.
The words and rules model contradicts previous ideas (both connectionist and Chomskyan)
hypothesizing that irregular past tenses are the result of rules applied based on phonological
similarities. Pinker accepts a weak form of the connectionism model to explain the origin of the
small number of recent irregular verbs that obtained their past tenses due to surface similarity to
other already-irregular verbs. However, Pinker cites research showing these sorts of
generalization to be exceedingly rare in comparison to the overapplication of the regular past tense
rules ("add '-ed'") to words with irregular past tenses. His research also examines past-tense
formation among German speakers, further supporting his conclusion.
This fascinating book delves into the structure of language, and the corresponding mental structures
we presumably possess in order to design, use, and understand language. "Words and Rules," like
Pinker's earlier book, "The Language Instinct" (also recommended), is easily readable by a non-expert, but nonetheless gives a rigorous explanation. I'll give "Words and Rules" a "+".
At issue is how language is organized in our minds-- do we use language by applying a series of rules
(such as, "add -ed for most past tense verbs" in English), or do we store language in the form of many
word lists (such as, "the past tense of 'drink' is 'drank'; the past tense of 'blink' is 'blinked', etc)? [Or,
to use Pinker's definitions: "Words in the sense of memorized links between sound and meaning;
rules in the sense of operations that assemble the words into combinations whose meaning can be
computed from the meanings of the words and the way they are arranged"]. The answer seems to
be that language is a complicated combination of both words and rules.
Much of the book is devoted to justifying this claim, which Pinker notes is not supported by all
linguists. He gives many examples of other theories of language structure, then describes their
shortcomings, for each fails in some cases. His hybrid words-and-rules theory holds up quite well. It
fits the way children learn language (and make early mistakes) and the way patients with certain
types of brain damage misuse language. For instance, people with Alzheimer's disease lose access to
their words lists, while those with Parkinson's disease have difficulty applying rules.
Here's a simplified version of what seems to happen in a normal adult brain (Pinker gives more detail
in the book): Some linguistic forms are stored in memorized lists-- these tend to be irregular verb and
noun forms, such as dig-dug, strike-struck, and child-children. When you go to form a sentence, your
brain looks for forms on these lists. If it comes up empty, then a default rule is applied-- the past
tense is made by adding -ed, or the plural is made by adding -s. All of this occurs in a split second, of
course.
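
The lookup-then-default procedure is easy to state as a small sketch (mine, not Pinker's; the tiny irregular list stands in for a real mental lexicon):

    # Dual-route sketch of the process described above: irregular past
    # tenses live in a memorized list; the default "-ed" rule fires only
    # when retrieval from that list comes up empty.
    IRREGULAR_PAST = {"dig": "dug", "strike": "struck", "go": "went"}

    def past_tense(verb):
        retrieved = IRREGULAR_PAST.get(verb)
        if retrieved is not None:
            return retrieved      # stored form blocks the rule
        return verb + "ed"        # rule applies on the fly, nothing stored

    print(past_tense("strike"))   # "struck"  (retrieved from the list)
    print(past_tense("blink"))    # "blinked" (generated, never memorized)

On this sketch, an irregular entry that drops out of the list is silently handed to the default rule, which matches the regularization of rare forms discussed below.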
The beauty of the combination words-and-rules system is that it can optimize brain function better
than a system relying only on words or only on rules. If you had to remember every single form
separately, that would take a lot of space-- obviously it is more efficient to have some general rules
for how to relate the forms to each other (such as "add -ed for the past tense"). So why not use
only rules? This turns out to be inefficient also. Not only does this require more computation time,
but it's actually difficult to make a consistent set of rules that can capture all the words we need to
express our complex thoughts. We know because people have tried to devise "logical" languages
that rely on precise rules. They are a disaster. They quickly become unwieldy with literally
hundreds of rules. And this creates a new problem: you have to memorize all those rules! Might as
well go back to memorizing lists of words, right?
So the best compromise for our brains seems to be retaining some irregular forms as words lists, and
using some rules (perhaps fewer than ten) for regular inflections. That's how we ended up with such a
mishmash language.
But it's even more interesting than that. There is some rhyme and reason to how we inflect even the
irregular forms. It's not just a random choice; the way the word is pronounced, the regular forms it
sounds like, and the historical development of the word all influence the particular irregular
inflection we give it. With some careful study, it is possible to explain why we say drink-drank, but
blink-blinked.
And furthermore, regular and irregular forms are not set in stone. There are mechanisms in language
evolution that allow one to transform itself into the other. Most irregular forms are relics of history-- they used to come from applying a rule, but because of shifts in pronunciation or vocabulary, the rule
became less useful, and was eventually not recognized as a rule by a new generation of children.
Hence, the forms were simply memorized. By this mechanism, regular forms can be irregularized
over time. There are other methods of winding up with irregular forms, too, such as combining
several verbs into one. This happened with go-went, which used to be two separate verbs. We just
kept the present form of one and the past form of the other when we combined them.
Regular forms don't have much history, precisely because they are generated on the fly (you don't
have a list in your head with the word "blinked" on it; you literally create this form each time you use
it, then forget it again). But if an irregular form sounds like a regular one, the regular form may "co-opt" it, thus regularizing it. Also, unusual words are most likely to be regularized because we forget
our lists if we don't use them enough. This happened with the past tense of "chide," which used to
be "chid." Now we use "chided." Remember, when the irregular form fails to pop up from the list, the
regular rule is applied. So if a word happens to be used less and less frequently, it will become
regularized over time.
(This neatly explains, by the way, why irregular forms tend to be the most common words. Go-went,
am-is-are, do-did, have-had, say-said: these are all very common verbs, and they are all irregular.
Uncommon verbs like avow (declare, acknowledge, confess) and truncate are unfailingly regular. And
newly invented words, which we can't possibly have on our lists, tend to be made into regular forms
by default).
Pinker also applies this nifty theory to other languages. There is a fascinating chapter on
German, which I appreciated since I also speak German (but even non-speakers will get something
out of it). It happens that German is a language where the "regular" form is not the most common! In
English, it's easy to say that the past-tense "-ed" ending is "regular" simply because it is used so
often. But that is not Pinker's definition of "regular." His idea is that the regular form is the one we
use as the rule, while all other forms are memorized as words. By testing German speakers with
nonsense words, he teased out the actual rule that they apply to generate regular forms, and it is not
the most common inflection. This provides evidence for the mental organization he is talking about-- it exists even beyond our conscious knowledge of which words are used most frequently.
However, I was left with a few questions about foreign languages. First, I also study Japanese. This
amazing language has just two truly irregular verbs. All other verbs fall into two perfectly inflected
categories, for which knowing the verb stem tells you all the forms. (OK, OK, there are a few
exceptions even here, but nothing as pronounced as strike-struck or ride-rode). Pinker touches
briefly on Asian languages with a single mention of Chinese. But I was left hungering for a way to
understand Japanese verbs. I guess that was beyond the scope of the book.
But second, the more I thought about foreign languages, the more I wondered whether Pinker's
model is really an optimization issue like I described above. After all, languages range from Japanese-- with just two irregular verbs-- all the way to German-- with far more exceptions than rules. If the
combination of "words and rules" is really acting to optimize brain function, why is there such a wide
range seen among the languages of the world? It seems that the brain works just fine for a huge
variety of linguistic structures.
I was convinced by Pinker's thorough research that he's onto something with the idea of having some
words stored on lists, and then applying default rules for cases not covered by the lists. All his results
support this idea. But I'm not sure he's figured out exactly why it works this way.
Anyway, Pinker also relates some of these language ideas to brain organization in an intriguing
speculative section at the end. He talks about the way we "categorize" the world around us into
"family groups" (like "birds", where we have a clear idea of what isn't included, but no exact
definition of what is) and "Aristotelian categories" that are more clearly defined, such as
"grandmother." He points out that much tension arises from the fuzzy borders around family groups
conflicting with the sharp edges of clear categories. This is likened to the tension between words and
rules, from which arises much of the richness of language.


As noted above, this book is a fabulous journey through modern linguistics theory for the non-specialist. Pinker doesn't oversimplify or talk down, but gives a fully comprehensible analysis of one
of the most interesting parts of human cognition. This is a fairly young field, and it's fun to catch it
near the beginning. There will be a chance to watch it evolve over the coming decades.


Zone of Proximal Development


The zone of proximal development, often abbreviated ZPD, is the difference between what a learner
can do without help and what he or she can do with help. It is a concept developed by the Soviet
psychologist and social constructivist Lev Vygotsky (1896-1934).
Vygotsky stated that a child follows an adult's example and gradually develops the ability to do
certain tasks without help. Vygotsky's often-quoted definition of the zone of proximal
development presents it as "the distance between the actual developmental level as determined by
independent problem solving and the level of potential development as determined through
problem solving under adult guidance, or in collaboration with more capable peers."
Vygotsky, among other educational professionals, held the role of education to be to provide
children with experiences which are in their ZPD, thereby encouraging and advancing their individual
learning.
The concept of the zone of proximal development was originally developed by Vygotsky to argue
against the use of standardized tests as a means to gauge students' intelligence. Vygotsky argued
that rather than examining what a student knows to determine intelligence, it is better to examine
their ability to solve problems independently and their ability to solve problems with the assistance
of an adult.
Development
The concept of ZPD has been expanded, modified, and changed into new concepts since Vygotsky's
original conception.
The concept of scaffolding is closely related to the ZPD, although Vygotsky himself never mentioned
the term; instead, scaffolding was developed by other sociocultural theorists applying Vygotsky's ZPD
to educational contexts. Scaffolding is a process through which a teacher or more competent peer
gives aid to the student in her/his ZPD as necessary, and tapers off this aid as it becomes
unnecessary, much as a scaffold is removed from a building during construction. According to
education expert Nancy Balaban, "Scaffolding refers to the way the adult guides the child's learning
via focused questions and positive interactions." This concept has been further developed by Ann
Brown, among others. Several instructional programs were developed on the basis of the notion of
ZPD interpreted this way, including reciprocal teaching and dynamic assessment.
ZPD has been implemented as a measurable concept in the reading software Accelerated Reader.
The developers of Accelerated Reader describe it as "the level of difficulty [of a book] that is neither
too hard nor too easy, and is the level at which optimal learning takes place" (Renaissance Learning,
2007). The STAR Reading software suggests a ZPD level, or it can be determined from other
standardized tests. The company claims that students need to read books that are not too easy, so as
to avoid boredom, and not too hard, so as to avoid frustration. This range of book difficulty, so
claimed, helps to improve vocabulary and other reading skills.
While Vygotsky's ZPD was originally applied strictly to problem-solving ability, Tharp and Gallimore
point out that it can be expanded to examine other domains of competence and skills. These
specialized zones of development include cultural zones, individual zones, and skill-oriented zones.
Regarding skill-oriented zones, it is commonly believed among early childhood
development researchers that young children learn their native language and motor skills in general
by being placed in the zone of proximal development (Wells, p. 57).
Through their work with collaborative groups of adults, Tinsley and Lebak (2009) have identified the
"Zone of Reflective Capacity." This zone shares the theoretical attributes of the ZPD, but is a more
specifically defined construct helpful in describing and understanding the way in which an adult's
capacity for reflection can expand when collaborating with other adults with similar goals over an
extended period of time. Tinsley and Lebak found that as adults shared their feedback, analyses, and
evaluations of one another's work in a collaborative working environment, their potential for critical
reflection expanded. The zone of reflective capacity expanded as trust and mutual understanding
among the peers grew.
The zone of reflective capacity is constructed through the interaction between participants engaged
in a common activity and expands when it is mediated by positive interactions with other
participants, exactly along the same lines as the ZPD, as Wells (1999) described.


Milestones
1786 Indo-European language family, Sir William Jones
1820s Human language as a rule-governed system, Wilhelm von Humboldt
1916 Cours de linguistique générale, Ferdinand de Saussure
1920s Structuralism in language, Saussure's students
1920s Markedness Theory, Roman Jakobson
1939-1945 Language teaching courses for soldiers, Leonard Bloomfield
1940s Action Research, Kurt Lewin
1956 Language, Thought, and Reality, Benjamin Whorf
1957 Verbal Behavior, B. F. Skinner
1957 Syntactic Structures, Avram Noam Chomsky
1957 Contrastive analysis, Robert Lado
1958 Wug Test, Jean Berko Gleason
1959 Critical Period Hypothesis, Wilder Penfield
1959 13 Design Features of Language, Charles F. Hockett
1962 Speech acts, J. L. Austin
1965 American Sign Language as a natural language, William Stokoe
1966 Language Aptitude Battery, Paul Pimsleur
1967 Critical Period Hypothesis (popularized), Eric Lenneberg
1970s Attribution Theory, Bernard Weiner
1974 Error Analysis, S. P. Corder
1977 Markedness, Fred Eckman
1980s Second Language Acquisition Theory, Stephen Krashen
1983 The theory of multiple intelligences, Howard Gardner
1986 Acculturation model, John Schumann
1989 Language and Power (beginning of Critical Discourse Analysis), Norman Fairclough
1994 The Language Instinct, Steven Pinker
1997 Large corpus studies, Geoffrey Leech


1. Acquisition-learning hypothesis
2. Acculturation
3. Action Research
4. Affective filter
5. Aphasia
6. Arbitrariness
7. Attribution Theory (Weiner)
8. Zone of Proximal Development
9. Autonomy (Learner)
10. Baby Talk / Child Directed Speech
11. Bloom's Taxonomy
12. Bottom-Up Approach
13. Clinical Supervision
14. Cognitive Linguistics
15. Cognitive Linguistics II
16. Communication Strategy
17. Communicative Competence
18. Competence / Performance
19. Comprehensible input
20. Connectionism
21. Constructivism
22. Contrastive Analysis
23. Conversational Maxim
24. Critical Linguistics
25. Critical Pedagogy
26. Critical Period Hypothesis
27. Deficit Hypothesis
28. Diachronic Linguistics
29. Discourse Analysis
30. Critical Discourse Analysis
31. Experiential Learning
32. Formative Assessment
33. Functions of Language
34. Generative Grammar
35. Government-And-Binding Theory
36. Hockett's 13 Design Features of Language
37. Innateness Hypothesis
38. Interlanguage
39. Interlanguage Fossilization
40. Joint attention
41. Kelly's Personal Construct Theory
42. Language Acquisition
43. Language learning aptitude
44. Language Disability
45. Language Instinct
46. Language Ego
47. Learning Styles
48. Learning Theory
49. Language Transfer
50. Lateralization of brain function
51. Learning Strategies
52. Linguistic Relativity
53. Manner of Articulation
54. Mean Length of Utterance
55. Markedness
56. Meaningful vs. Rote Learning
57. Metalinguistic awareness
58. Microteaching
59. Mirror Neurons
60. Monitor hypothesis
61. Motivation
62. Multiple intelligences
63. Neurolinguistics
64. Origin and Evolution of Language
65. Phonotactics
66. Phrase-Structure Grammar
67. Pidgin
68. Place of Articulation
69. Pragmatics
70. Principles and Parameters
71. Prosody (linguistics)
72. Saussurean Paradox
73. Sex Differences In Language
74. Stylistics
75. Suprasegmental
76. Summative Assessment
77. Transformational Grammar
78. Triangulation
79. Order of acquisition
80. Poverty of the stimulus
81. Problem-based learning
82. Speech Act
83. Sign Language
84. Social Constructivism
85. Syllabus Design
86. Synaptic pruning
87. Task-Based Language Learning
88. Theory of Mind
89. Universal grammar
90. Vygotsky's influence on Krashen's second language acquisition theory