research design
- serves as the structure of the study
- gives direction to the study
- makes the research more systematic
1. Survey research
- uses interviews, questionnaires, polls, and other similar instruments to gather data
- instruments are usually distributed to a random sample of the population to obtain accurate results
2. Correlational research
- studies the connection between two variables
- it is possible that there is no direct cause-and-effect relationship between the variables
3. Causal-comparative research
- verifies a suspected cause-and-effect relationship between variables
- compares the variables but does not focus on the relationship itself
4. Experimental research
- based on a formed hypothesis
- aims to prove or disprove the hypothesis
SAMPLING DESIGN
entire population
- nearly impossible to collect data from every member
sample
- used for data collection instead
- a small representation of the population being studied
quantitative studies
- make use of larger sample sizes because they rely on statistics and numbers
sample
- should be chosen randomly
- should be in proportion to the population size
- used in order to make data collection easier and possible to do
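The idea of a randomly chosen, proportionally sized sample can be sketched in a few lines of Python; the population names and the 10% sampling rate below are illustrative assumptions, not from the source.

```python
# Sketch: drawing a simple random sample whose size is proportional
# to the population size (names and the 10% rate are illustrative).
import random

population = [f"respondent_{i}" for i in range(500)]  # hypothetical population
rate = 0.10                                           # assumed sampling rate

random.seed(42)  # fixed seed so the draw is repeatable
sample = random.sample(population, k=int(len(population) * rate))

print(len(sample))  # 50 respondents, each chosen with equal probability
```

`random.sample` draws without replacement, so every member of the population has an equal chance of selection and no one appears twice.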
Advantages:
a. reduces the cost of research
b. makes data collection much easier and faster
c. better manipulation and control of data
d. easier to avoid errors
e. easier to analyze data
Disadvantages:
a. may introduce bias due to external factors
b. requires the researcher to know about statistics
Non-Probability Sampling
- a sampling technique that does not give all the individuals in the population equal chances of being selected
Subjects are selected on the basis of:
a. their accessibility
b. the purposive judgment of the researcher
Downside of non-probability sampling:
- an unknown proportion of the entire population is left unsampled
- the sample may not represent the entire population accurately
Non-Probability Sampling Methods:
1. Convenience sampling
- most common of all sampling methods
- samples are selected because they are accessible to the researcher
- samples are chosen because they are easy to recruit
- the easiest, cheapest, and least time-consuming method
2. Consecutive sampling
- very similar to convenience sampling except:
- it includes ALL accessible subjects as part of the sample
- the BEST of all non-probability samples
- it includes ALL subjects that are available
- makes the sample a better representation of the entire population
3. Quota sampling
- the researcher ensures equal or proportionate representation of subjects depending on which trait is considered as the basis
Example: basis of quota: college year level
Sample size = 100
- The researcher must select 25 1st-year students, 25 2nd-year students, 25 3rd-year students, and 25 4th-year students.
Bases of quota:
usually:
- age
- gender
- race
- religion
- socioeconomic status
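The equal-quota allocation in the year-level example above can be sketched as follows; the group names and sample size mirror the example, and an equal split is assumed.

```python
# Sketch: allocating an equal quota per college year level, as in the
# example above (sample size and group names are illustrative).
levels = ["1st year", "2nd year", "3rd year", "4th year"]
sample_size = 100

quota = {level: sample_size // len(levels) for level in levels}
print(quota)  # 25 subjects per year level
```

A proportionate (rather than equal) quota would instead weight each level by its share of the population.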
4. Judgmental sampling
- more commonly known as purposive sampling
- subjects are chosen to be part of the sample
- with a specific purpose in mind
- the researcher believes that some subjects are better suited to the research than other individuals
5. Snowball sampling
- usually done when there is a very small population size
- the researcher asks the initial subject to identify another potential subject who also meets the criteria of the research
INSTRUMENT DEVELOPMENT
Steps:
1. Background
- involves looking into the:
- purpose
- objectives
- research questions
- hypothesis of the research
- gives a good idea on what type of data to collect
- helps create a more effective questionnaire
2. Questionnaire Conceptualization
- involves formulating questions and statements for the questionnaire
- the contents of the framework and the RRL (review of related literature) are converted into questions that can be used to gather data
- must be in line with the objectives of the research
4. Establishing Validity
- looking over the draft and making sure that it will be able to measure the data correctly and properly
Guide questions to make sure that the questionnaire is done correctly (Journal of Extension):
a. Is the questionnaire valid? In other words, is the questionnaire measuring what it
intended to measure?
b. Does it represent the content?
c. Is it appropriate for the sample/population?
d. Is the questionnaire comprehensive enough to collect all the information needed to
address the purpose and goals of the study?
e. Does the instrument look like a questionnaire?
Readability formulas that help improve the readability and validity of the material:
a. Flesch Reading Ease
b. Flesch-Kincaid Readability Formula
c. Gunning Fog Index
5. Establishing Reliability
- involves making sure that the questionnaire is reliable
- making sure that the questionnaire is accurate and precise in collecting and measuring the data
- done by conducting a pilot test
- giving out the questionnaire to a few people to make sure that it will perform correctly
RELIABILITY OF DATA COLLECTION INSTRUMENT
Reliability
- a measure of the degree to which a research instrument yields consistent results after repeated trials
Instrument
- considered reliable when it can measure a variable accurately and obtain the same results over a period of time
Cronbach's alpha
- most common measure of reliability
- usually interpreted as the mean of all possible split-half coefficients
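Cronbach's alpha can be computed directly from an item-response matrix using the standard formula α = k/(k−1) · (1 − Σ item variances / variance of total scores); the small response matrix below is made up for illustration.

```python
# Sketch: computing Cronbach's alpha from a small response matrix
# (rows = respondents, columns = items; the data are made up).
from statistics import pvariance

responses = [
    [4, 5, 4, 4],
    [3, 4, 3, 3],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 4, 4, 3],
]

k = len(responses[0])                                   # number of items
items = list(zip(*responses))                           # column (per-item) view
item_vars = sum(pvariance(item) for item in items)      # sum of item variances
total_var = pvariance([sum(row) for row in responses])  # variance of total scores

alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(round(alpha, 3))  # ≈ 0.964 for this toy data
```

Statistical packages such as SPSS report the same coefficient; values around .7 or higher are conventionally taken as acceptable reliability.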
Pilot test
- conducted to determine the reliability of research instruments
- instruments are administered to an independent sample
- a sample which is not part of the final sample but which shares the same characteristics as the study sample
Reliability
- may be estimated through a variety of methods:
1. Single-administration method
- requires only one assessment
- includes the split-half method
- uses two halves of a measure (odd and even items) as alternative forms
- involves:
a. administering a test to a selected sample of individuals
b. splitting the test in half (odd items and even items)
c. correlating scores on one half of the test with scores on the other half
- the correlation between the two sets of scores is used to estimate the reliability of the instrument
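The split-half steps above can be sketched in Python: split a test into odd and even items, sum each half per respondent, and correlate the two half-scores. The 6-item scores and the hand-rolled `pearson` helper are illustrative.

```python
# Sketch of the split-half method: correlate odd-item scores with
# even-item scores (the test data below are made up).
def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# rows = respondents, columns = item scores on a hypothetical 6-item test
scores = [
    [1, 1, 0, 1, 1, 0],
    [1, 0, 1, 1, 0, 1],
    [0, 0, 1, 0, 1, 0],
    [1, 1, 1, 1, 1, 1],
]

odd_half = [sum(row[0::2]) for row in scores]   # items 1, 3, 5
even_half = [sum(row[1::2]) for row in scores]  # items 2, 4, 6

r = pearson(odd_half, even_half)
print(round(r, 3))
```

The resulting half-test correlation is the split-half coefficient used to estimate reliability.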
2. Multiple-administration method
- includes the test-retest method
- involves administering the instrument twice within a given time interval (e.g., after two weeks)
- the two sets of scores are correlated to establish the reliability of the instrument
Reliability Coefficients
Alpha = .7396
Conclusion:
- reliability was computed with the help of SPSS using Cronbach's alpha for a 30-item questionnaire
- an alpha of .7396 was obtained upon computation
- the questionnaire is accepted as reliable
Tips in Instrument Development:
1. Determine the purpose
2. Decide what you are measuring
3. Know your respondents
4. Choose a collecting method
DATA COLLECTION
quantitative data
- can be collected using a number of different methods and from a variety of sources
2 types of data
1. Primary data
- also known as firsthand data
- data were collected directly from the sources and are the original data
2. Secondary data
- those which have been collected by someone else and have passed through statistical processes
Survey
- the most common method of gathering data and of diagnosing and solving social problems in quantitative studies
- captures information through responses to a research instrument containing questions (example: a questionnaire)
- responses can be:
a. input by the respondents themselves
Example: completing an online survey
b. input by the researcher
Example: phone survey, mall intercept
Main methods for distributing surveys:
a. postal mail
b. phone
c. website
d. in person
DATA ANALYSIS
- a process of inspecting, cleaning, transforming, and modeling data with the goal of highlighting essential information, suggesting conclusions, and supporting decision making
- it is the process which follows after data collection
1. Data Cleaning
- done to remove ambiguous elements
content analysis
- applied to capture information from open-ended questions, which is also subject to quantification in quantitative research
2. Data Coding
- refers to the process of assigning numerals or other symbols to answers so that responses can be put into a limited number of categories
- a vital step where the collected data is translated into values suitable for computer entry
and statistical analysis
variables
- created from data to simplify the analysis
- meant to summarize and reduce the data, attempting to represent the essential information
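Data coding as described above can be sketched as a simple lookup from response labels to numerals; the Likert-type scale labels and responses below are illustrative assumptions.

```python
# Sketch: coding Likert-type responses into numerals so they can be
# entered into a computer for analysis (labels and data are made up).
CODES = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

raw_responses = ["agree", "neutral", "strongly agree", "disagree"]
coded = [CODES[answer] for answer in raw_responses]
print(coded)  # [4, 3, 5, 2]
```

The codebook (the `CODES` mapping) should be documented so every response category translates to exactly one value.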
3. Data Presentation
tables and figures
- used to summarize the coded data
spreadsheets and advanced statistical packages
- provide built-in functions that summarize the data into either tables or figures
quantitative data
- summarized in order to help the process of data presentation
- involves the use of descriptive statistics
such as:
a. frequencies
b. percentages
c. means
d. standard deviation
- also presented using inferential statistics such as:
a. t-tests
b. Analysis of Variance
c. Multiple Analysis of Variance
d. Regression
e. factor analysis
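The descriptive statistics listed above (frequencies, percentages, means, standard deviation) can be computed from coded data with Python's standard library; the coded values below are made up for illustration.

```python
# Sketch: summarizing coded data with the descriptive statistics
# listed above (data are illustrative).
from collections import Counter
from statistics import mean, stdev

coded = [4, 3, 5, 2, 4, 4, 3, 5, 4, 2]

freq = Counter(coded)                                     # frequency per code
pct = {k: 100 * v / len(coded) for k, v in freq.items()}  # percentages

print(dict(sorted(freq.items())))  # frequency table
print(round(mean(coded), 2))       # mean response
print(round(stdev(coded), 2))      # sample standard deviation
```

These summaries feed directly into the tables and figures used for data presentation.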
4. Data Interpretation and Discussion
- follows once the data are presented
Data interpretation
- involves the provision of comments on the results obtained from the investigation
- done based on the key findings of the study
- requires a deep understanding of the literature and issues under investigation
- must stay within the framework of what the analyzed data suggest, not an exaggeration
- should be done in context and supported by literature