
Chapter 6
Data Collection for Quantitative Research

Overview of the chapter

Types of sources - secondary and primary
Measurement and scaling
Differences between measurement and scaling
Concepts, indicators and variables
Levels of measurement scales
Types of scales
Goodness of measurement instrument

Data collection
One of the important activities in the
research process
Data can be collected in many ways
and using various types of methods
Data can be classified into
quantitative and qualitative

Data collection method


Sources of data can be secondary or
primary
Methods for collecting quantitative
data can be in the form of
survey/questionnaire or e-survey
In terms of setting, data can be
collected either in a natural setting
or lab setting

Where quantitative data comes from
Primary Sources:
Surveys
Content analysis
Observation studies
Archival studies
Other
Secondary Sources:
Official statistics (surveys and other material)
Archived academic surveys
Other archived datasets

A lot of secondary datasets are now available on the web, and access is free for those in academic environments.
You can browse some data at:
UK Data Archive www.data-archive.ac.uk/
Economic and Social Data www.esds.ac.uk/
Office for National Statistics www.statistics.gov.uk/

Primary sources
Data are collected by acquiring first-hand information from the respondents
Analysis is conducted by the investigator(s) or institution that collected the data
For instance, sending a mail survey to respondents or interviewing the respondents
Data can also be collected through observation; the observations are documented in a form, and the observation can be non-participatory or participatory
Observation is mainly used in the qualitative approach

Observation
The researcher indicates the area that she or he would like to observe
Takes note of important points related to the area observed
The researcher can decide whether to participate with the subjects or to observe without participating
For the quantitative researcher, observation is one of the ways to gain an initial understanding of the area to be studied

Questionnaire
Also known as a survey
It provides a quantitative or numeric description of the trends, attitudes, opinions or perceptions of a population by studying a sample of that population
An instrument of measurement

Interview
Quantitative data can also be acquired from interviews, for instance when asking participants about their ages or income levels
Interviews can be categorised as structured, semi-structured or unstructured

Secondary sources
Secondary analysis is any further analysis of an existing dataset that produces results or conclusions other than those produced in the first report on the inquiry
Often carried out by different people than those who collected the data
Information is obtained from articles, research reports, etc.
Quantitative data can also be acquired from databases developed by companies

Quantitative data
Data are used to classify groups
Examples: numbers, quantity, prevalence, incidence
Variables can be classified as physical (population, infrastructure), social (poverty, slums), spatial (land use, proximity), etc.

Quantitative data example

Measurement
Measurement encompasses the scales and scaling used in measuring concepts, as well as the reliability and validity of the measures used
Measurement in quantitative research should fulfil:
1. Validity - are you measuring what you think you are measuring?
2. Objectivity - researchers stand outside the phenomena they study; the data collected are free from bias
3. Reliability - if something were measured again using the same instrument, would it produce the same or nearly the same results?
4. Accuracy - are the methods adequate to answer your questions? Do they reveal credible information and convey important information?
5. Precision - how trustworthy the measure is; how confident we can be in the result

Measurement
Is about how the variables in the framework are measured
Operationalising the variables
Developing scales

Scaling
Scaling is the construction of instruments for measuring abstract constructs
e.g. using a Likert scale to measure respondents' perceptions of the UUM postgraduate programme

Linking Data Collection to Variables and Questions

Flow of Activities, with example:
1. Identify the variable: Self-efficacy for learning from others
2. Operationally define the variable: Level of confidence that an individual can learn something by being taught by others
3. Locate data (measures, observations, documents with questions and scales): 13 items on a self-efficacy attitudinal scale from Bergin (1989)
4. Collect data on instruments yielding numeric scores: Scores on each item ranged from 0 to 10, with 10 being completely confident

(Source: John W. Creswell, Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research)

Information to Collect: Types of Data Measures
An instrument is a tool for measuring, observing, or documenting quantitative data.
Types of instruments:
Performance measures (e.g., test performance)
Attitudinal measures (measures of feelings toward educational topics)
Behavioral measures (observations of behavior)
Factual measures (documents, records)

(Source: John W. Creswell, Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research)

Locating or Developing an Instrument for Data Collection
Look in published journal articles
Run an ERIC search with a descriptor for the instrument you want, to see if there are articles that contain instruments
Check Tests in Print
Check the Mental Measurements Yearbook published by the Buros Center at the University of Nebraska, Lincoln, NE (unl.edu/Buros)
Develop your own instrument

(Source: John W. Creswell, Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research)

Criteria for Choosing a Good Instrument
Have the authors developed the instrument recently?
Is the instrument widely cited by other authors?
Are reviews available for the instrument?
Does the procedure for recording data fit the research questions/hypotheses in your study?
Does the instrument contain accepted scales of measurement?
Is there information about the reliability and validity of scores from past uses of the instrument?

(Source: John W. Creswell, Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research)

Variables
Quantitative analysis involves the study of variables.
Variables are attributes that vary across cases, and/or within a case over time.
For example: gender, age, happiness, political association, occupation, number of students on a course.
The process of going from a concept to a variable involves operationalisation of the research question.
When variables are produced in surveys they are often the product of closed questions.
Closed questions are questions in which possible answers are given and the respondent selects from these: all of the questions in last week's survey were closed, as you did not have the option to write your own answers.

Aspects of Measurement Validity in Social Research
Are we measuring what we intend to measure?
Or, are we measuring the correct concept?

What is a concept in sociological research?
Concept:
1. Mental constructs, or images, developed to symbolize ideas, persons, things, or events (symbolic interaction).
2. An organizing principle used to differentiate those classes of phenomena with common characteristics from other classes of phenomena.

Conceptualization
Conceptualization is the process of specifying what we mean by a term: a clear, verbal specification of your variable (concept) so that others know what it is and can place cognitive borders around it.

In deductive research, conceptualization helps to translate portions of an abstract theory into specific variables that can be used in testable hypotheses.
In inductive research, conceptualization is an important part of the process used to make sense of related observations.

Conceptualization (continued)
If we hypothesized that lower social status in college students directly correlates to an increase in deviant behavior, then what exactly is "deviant behavior"? (This is the conceptual DV we are measuring.)
Deviant behavior could be conceptualized as: causing physical harm, talking out loud in class, underage drinking, ...

The Multi-Dimensions of Conceptual Definitions
What is a table? What is a car? What is religiosity?
Dimensions of religiosity:
1. Ideological dimension
2. Experiential dimension
3. Ritualistic dimension
4. Consequential dimension
5. Intellectual dimension
What about TALL vs. ... ?

Concepts in Research
Let's do a study to see if there is a direct relationship between the social status of parents and deviant behavior in college students.
Conflicts in measurement validity:
1. All concepts are multi-dimensional.
What do we mean by "social status"?

[Diagram: the process of conceptualization and operationalization]
Concept: Social Status
Dimensions: Power, Prestige, Privilege
Indicators: Income, Possessions, Fashion, Teeth, Self-Report, Jewelry, etc.

Operationalization
In Social Research, there are
Operations of Measurement.
Operation: A procedure for identifying or indicating
the value of cases on a variable. (Instructions)
Operationalization: The process of specifying the
operations that will indicate the value of cases on a
variable

Let's say we were to operationalize the effects of sending flowers and get-well cards to a patient, as a method of increasing health.

How will we measure "increasing health" of patients?
Shorter hospital stays: operationalized as days spent in the hospital.
Normal heart rates and blood pressures: operationalized as beats per minute and diastolic and systolic pressures.
Increased morale: operationalized by asking patients a series of questions.
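As an illustrative sketch only (the record fields, values and morale items below are hypothetical, not from the slides), operationalization can be written down as explicit rules that turn observations into the values of variables:

```python
# A minimal sketch of operationalization: each operation is an explicit rule
# that turns raw observations about a patient into the value of a variable.
# All field names and numbers here are hypothetical.

patient = {
    "days_in_hospital": 4,               # "shorter hospital stays" -> days spent in hospital
    "heart_rate_bpm": 72,                # "normal heart rate" -> beats per minute
    "bp_systolic": 118,                  # blood pressure -> systolic reading
    "bp_diastolic": 76,                  # blood pressure -> diastolic reading
    "morale_item_scores": [4, 5, 3, 4],  # "increased morale" -> answers to a series of questions
}

def morale_score(record):
    """Operation: morale = mean of the answers to the morale questions."""
    answers = record["morale_item_scores"]
    return sum(answers) / len(answers)

print(patient["days_in_hospital"], morale_score(patient))  # 4 4.0
```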

Levels of Measurement: Nominal, Ordinal, Interval, Ratio

The level of measurement is the mathematical precision with which the values of a variable can be expressed.
The nominal (name) level of measurement, which is qualitative, has no mathematical interpretation.
The quantitative levels of measurement (ordinal, interval, and ratio) are progressively more precise mathematically.
When we know a variable's level of measurement, we can better understand how cases vary on that variable and so understand more fully what we have measured.
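A small sketch of how the level of measurement constrains which statistics are meaningful (the variables and values are made up for illustration):

```python
from collections import Counter

# Nominal: categories only -> counting and the mode are meaningful, means are not.
gender = ["F", "M", "F", "F", "M"]
print(Counter(gender).most_common(1))        # mode of a nominal variable

# Ordinal: ordered categories -> ranking and the median category are meaningful.
satisfaction = ["low", "high", "medium", "high", "low"]
order = {"low": 1, "medium": 2, "high": 3}
ranked = sorted(satisfaction, key=order.get)
print(ranked[len(ranked) // 2])              # median category

# Interval: equal spacing but no true zero -> differences are meaningful, ratios are not.
temps_c = [10, 20, 30]
print(temps_c[1] - temps_c[0])               # 20 degrees C is NOT "twice as warm" as 10

# Ratio: true zero origin -> both differences and ratios are meaningful.
incomes = [1000, 2000, 4000]
print(incomes[2] / incomes[0])               # an income of 4000 really is 4x 1000
```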

Indicators and Dimensions
Indicators are observations we choose to represent as a variable of a concept we wish to study. Going to a religious place such as a mosque may be chosen as an indicator of religiosity.
Dimensions are facets or aspects of a concept.

*CAUTION*
LEVELS OF MEASUREMENT CAN'T BE DETERMINED WITHOUT CONSIDERING BOTH THE CONCEPT AND THE MEASURE
(Conceptualization and Operationalization)

Instrument/Measurement
Instrument development: the instrument can be adapted or adopted.
Adapted - an instrument acquired from a publication (journal, book and so on) on a topic related to the study and modified to suit it.
Adopted - based on an established instrument, used without changing the original content.

Level of Measurement Scales
Scales in measurement:
a) Nominal scale
b) Ordinal scale
c) Interval scale
d) Ratio scale

Nominal Scale
Is one that allows the researcher to assign subjects to certain categories or groups.
Categorises individuals or objects into mutually exclusive and collectively exhaustive groups.
Gives some basic, categorical, gross information.

Ordinal Scale
Not only categorises the variables in such a way as to denote differences among the various categories, it also rank-orders the categories in some meaningful way.
With any variable for which the categories are to be ordered according to some preference, the ordinal scale would be used.
Helps the researcher to determine, for example, the percentage of respondents who consider interaction with others to be most important, those who consider using a number of ...

Interval Scale
Allows us to perform certain arithmetical operations on the data collected from the respondents.
Not only groups individuals according to certain categories and taps the order of these groups, it also measures the magnitude of the differences in the preferences among the individuals.

Ratio Scale
Not only measures the magnitude of the differences between points on the scale but also taps the proportions in the differences.
Is the most powerful of the four scales because it has a unique zero origin and subsumes all the properties of the other three scales.

Types of scales
Numerical rating scale - e.g. "On a scale from 0 (poor) to 10 (excellent), how would you rate your supervisor?"

Thurstone
Rating the Potential Scale Items
Judges rate the items on an 11-point scale:
1 = agreement indicates a very low amount of the attribute
11 = agreement indicates a very high amount of the attribute
Encourage judges to use the entire range of the scale, assigning some statements to each of the 11 values (i.e. sorting them into 11 piles).

Thurstone Scales: Method of Equal-Appearing Intervals
Define the concept
Generate potential scale items
About 100 statements
The statements differ with respect to the extent to which agreement indicates presence of the attribute to be measured

Thurstone
Computing the Scale Score Values for Each Item
For each item, find the median and the SD or inter-quartile range of the judges' ratings
Arrange the items in a table
Sort by median
Within items with the same median, sort by SD or inter-quartile range
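A minimal sketch of this step, assuming the judges' ratings for each candidate item have already been collected (the items and numbers are invented):

```python
from statistics import median, quantiles

# Hypothetical 11-point judge ratings for three candidate statements.
judge_ratings = {
    "Item A": [1, 2, 1, 2, 1, 3, 2, 1],
    "Item B": [6, 5, 7, 6, 8, 5, 6, 7],
    "Item C": [10, 11, 9, 11, 10, 10, 11, 9],
}

# Scale value for each item = median of the judges' ratings;
# the inter-quartile range measures how much the judges disagree.
rows = []
for item, ratings in judge_ratings.items():
    q1, _, q3 = quantiles(ratings, n=4)
    rows.append((item, median(ratings), q3 - q1))

# Arrange in a table: sort by median, then by IQR within the same median.
rows.sort(key=lambda row: (row[1], row[2]))
for item, med, iqr in rows:
    print(f"{item}: median = {med}, IQR = {iqr:.2f}")
```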

Thurstone
Select the Final Scale Items
1 (or 2 or 3) item(s) for each possible
scale score value
Prefer items with low variability among
judges
End up with 10-30 items
Items and scale score values are shown
on the next slide.

People with AIDS deserve what they got. (1)
AIDS is good because it helps control the population. (2)
AIDS will never happen to me. (3)
I can't get AIDS if I'm in a monogamous relationship. (4)
It's easy to get AIDS. (5)
Because AIDS is preventable, we should focus our resources on prevention instead of curing. (5)
People with AIDS are like my parents. (6)
If you have AIDS, you can still lead a normal life. (8)
AIDS doesn't have a preference, anyone can get it. (9)
AIDS is a disease that anyone can get if they are not careful. (9)
AIDS affects us all. (10)
People with AIDS should be treated just like everybody else. (11)

Thurstone
Administer the Final Scale
Randomize the order of the items
For each item, the respondent chooses Agree or Disagree
For each item the scale score is the median from the judges' ratings
Total score = mean scale score of the items with which the respondent agreed
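A small scoring sketch for one respondent; the scale values are the medians attached to four of the example items above, while the Agree/Disagree answers are invented:

```python
# Scale score value of each administered item (the median of the judges' ratings).
scale_values = {
    "People with AIDS deserve what they got.": 1,
    "It's easy to get AIDS.": 5,
    "AIDS is a disease that anyone can get if they are not careful.": 9,
    "People with AIDS should be treated just like everybody else.": 11,
}

# The respondent marks Agree or Disagree on each (randomly ordered) item.
responses = {
    "People with AIDS deserve what they got.": "Disagree",
    "It's easy to get AIDS.": "Agree",
    "AIDS is a disease that anyone can get if they are not careful.": "Agree",
    "People with AIDS should be treated just like everybody else.": "Agree",
}

# Total score = mean scale value of the items the respondent agreed with.
agreed = [scale_values[item] for item, answer in responses.items() if answer == "Agree"]
print(sum(agreed) / len(agreed))   # (5 + 9 + 11) / 3 = 8.33...
```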

Thurstone
Thurstone scales are rarely used
these days
They are just too much trouble to
create.
Likert scales were developed in
response to this difficulty

Guttman Scaling
Define the concept
Generate potential scale items
Evaluating the potential items:
For each item, judges are asked whether someone high in the attribute would agree with the statement (Yes or No).

Guttman
Conduct a Scalogram Analysis of the Judges' Responses
Use special software to do this
If successful, it will create an ordered list of items such that:
agreeing with the first item indicates you have at least a little of the measured attribute
agreeing with the second indicates you have at least a little more of the attribute
and so on

Guttman
The scalogram analysis also computes a scale score value for each statement.
See the example in Trochim's Internet document "Guttman Scaling" (reproduced on the next slide).
It is assumed that anybody who would agree with the nth item would also agree with all preceding items.
The order of the items may be scrambled prior to administering the scale.

I believe that this country should allow in more immigrants.
I would be comfortable with new immigrants moving into my community.
It would be fine with me if new immigrants moved onto my block.
I would be comfortable if a new immigrant moved next door to me.
I would be comfortable if my child dated a new immigrant.
I would permit a child of mine to marry an immigrant.

Guttman
Administer the Final Scale
Respondents are asked to check the items with which they agree
Respondent's score = sum of the scale score values for the checked responses
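A minimal sketch of Guttman scoring and of checking the cumulative assumption, assuming the scalogram analysis has already ordered the items and assigned scale score values (all numbers are invented):

```python
# Items in scalogram order, with hypothetical scale score values.
scale_values = [1.2, 2.3, 3.1, 4.0, 4.8, 5.6]

# One respondent's checked (1) / unchecked (0) answers, in the same item order.
checked = [1, 1, 1, 1, 0, 0]

# Respondent's score = sum of the scale values of the checked items.
score = sum(v for v, c in zip(scale_values, checked) if c == 1)
print(score)   # 1.2 + 2.3 + 3.1 + 4.0 = 10.6

# Check the cumulative assumption: agreeing with item n should imply agreeing with
# every preceding item, so a valid pattern looks like 1,1,...,1,0,...,0.
violations = sum(1 for prev, curr in zip(checked, checked[1:]) if prev == 0 and curr == 1)
print(violations)   # 0 -> this response pattern fits the Guttman model
```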

Like Thurstone scales, Guttman scales are not often used these days.

Likert Scales
Define the Concept
Generate Potential Items
About 100 statements.
On some, agreement indicates being
high on the measured attribute
On others, agreement indicates being
low on the measured attribute

Likert
Instead of a dichotomous response scale (agree or disagree), use a multi-point response scale, e.g. Strongly Disagree / Disagree / Neutral / Agree / Strongly Agree.

Likert
Evaluating the Potential Items
Get judges to evaluate each item on a 5-point scale:
1 = agreement indicates very low on the attribute
2 = agreement indicates low on the attribute
3 = agreement tells you nothing
4 = agreement indicates high on the attribute
5 = agreement indicates very high on the attribute

Select items with very high or very low means and little variability among the judges.

Likert
Alternate Method of Item
Evaluation
Ask some judges to respond to the items
in the way they think someone high in
the attribute would respond.
Ask other judges to respond as would
one low in the attribute.
Prefer items that best discriminate
between these two groups
Also ask judges to identify items that are
unclear or confusing.

Likert
Pilot Test the Items
Administer to a sample of persons from
the population of interest
Conduct an item analysis
Prefer items which have high item-total
correlations
Consider conducting a factor analysis
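A minimal item-analysis sketch, assuming the pilot responses are stored as a respondents-by-items matrix (the data are invented); items with low or negative corrected item-total correlations are candidates for dropping or reverse scoring:

```python
import numpy as np

# Hypothetical pilot data: rows = respondents, columns = items (1-5 responses).
X = np.array([
    [5, 4, 5, 2, 4],
    [4, 4, 5, 1, 5],
    [2, 1, 2, 4, 1],
    [3, 2, 3, 3, 2],
    [5, 5, 4, 2, 5],
])

# Corrected item-total correlation: correlate each item with the total of the
# remaining items, so the item is not correlated with itself.
for j in range(X.shape[1]):
    rest_total = X.sum(axis=1) - X[:, j]
    r = np.corrcoef(X[:, j], rest_total)[0, 1]
    print(f"item {j + 1}: corrected item-total r = {r:.2f}")
```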

Likert
Administer the Final Scale
On each item, the response which indicates the least amount of the attribute is scored as 1
The next-least amount response is scored as 2, and so on
Respondent's total score = sum of item scores or mean of item scores
Deal with nonresponses on some items
Reflect items where needed (reverse scoring)
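A small scoring sketch under these rules (the item names, responses and reverse-keyed set are hypothetical): reverse-keyed items are reflected, nonresponses are skipped, and the total is the mean of the available item scores:

```python
# 5-point responses for one respondent; None marks a nonresponse.
responses = {"q1": 4, "q2": 2, "q3": None, "q4": 1, "q5": 5}
reverse_keyed = {"q2", "q4"}   # items where agreement indicates LOW on the attribute
MAX_POINTS = 5                 # top of the 1-5 response scale

def item_score(item, value):
    if value is None:
        return None                    # nonresponse: leave the item out of the score
    if item in reverse_keyed:
        return MAX_POINTS + 1 - value  # reflect: 1<->5, 2<->4, 3 stays 3
    return value

scores = [s for s in (item_score(i, v) for i, v in responses.items()) if s is not None]
print(sum(scores) / len(scores))       # (4 + 4 + 5 + 5) / 4 = 4.5
```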

Semantic Differential Scale
This technique assesses the extent of the subject's agreement with items, where the response for each item is shown on a continuum.
Example:
Skipping class in Sociology 302 is...
Good for me  1___2___3___4___5  Bad for me

Semantic Differential
This technique can be used to ask many questions in a short amount of space (mailed survey) or time (telephone survey).
The technique is intuitively appealing to most persons.
The technique provides continuous-level data.
The technique can become tiresome if used too extensively on a questionnaire.
It sometimes can be difficult to label the end-points of a semantic scale.

Goodness of Measurement Instrument
To know whether the instrument measures what it is supposed to measure
To know whether it consistently measures what it intends to
Can be tested through reliability and validity tests

VALIDITY
How do I know that the method I am using is really measuring what I want it to measure?
Face validity: does your method appear appropriate to measure what you want it to measure at first glance?
Content validity: similar to face validity, except that it refers to the initial assessment from an expert's point of view.
Predictive validity: can your measures predict future behaviour?
Construct validity: does your data correlate with other measures?

Reliability
Is established by testing for both consistency and stability.
Cronbach's alpha is a reliability coefficient that indicates how well the items in a set are positively correlated to one another.

Factor Analysis
The results of factor analysis (a multivariate technique) will confirm whether or not the theorized dimensions emerge.
Factor analysis would reveal whether the dimensions are indeed tapped by the items in the measure, as theorized.
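A minimal sketch of computing Cronbach's alpha from an items matrix (the data are invented; in practice a statistics package or library would be used):

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point responses: rows = respondents, columns = items.
X = [
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
]
print(round(cronbach_alpha(X), 2))   # values near 1 indicate highly inter-correlated items
```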

Sensitivity
Robustness of the measurement in collecting data

Exercise #1
Based on the journal article, please identify and explain:
a) The framework, including the variables involved in the study (independent, dependent and others)
b) The objectives or research questions
c) The research design
d) The data collection method, including the sampling technique, sample size, population and instrument
e) The data analysis used in the study

The End

Q & A Session
