In survey research, information is obtained from a sample of individuals through their responses to questions about themselves or others.

Surveys are the most popular form of social research, for several reasons:


versatile – can be used to study many issues

efficient – can collect information from large populations relatively quickly and at low cost; can include many variables and geographically diverse samples

generalizable – surveys lend themselves to probability sampling, so a representative picture of the attitudes and characteristics of a large population can be developed from a smaller sample.
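The generalizability point can be sketched in a few lines of Python. This is a hypothetical illustration: the population size and the 35% figure are made-up numbers, and real surveys estimate an unknown proportion rather than checking against a known one.

```python
import random

# Made-up population of 100,000 people, 35% of whom hold some attitude
# of interest (coded 1). In practice this proportion is unknown.
random.seed(1)
population = [1] * 35_000 + [0] * 65_000

# A probability (simple random) sample of 1,000 lets us estimate the
# population proportion from a much smaller group of respondents.
sample = random.sample(population, 1_000)
estimate = sum(sample) / len(sample)

print(f"population proportion: {sum(population) / len(population):.3f}")
print(f"sample estimate:       {estimate:.3f}")
```

Because every member of the population has a known chance of selection, the sample estimate lands close to the true proportion, which is what makes generalization to the larger population defensible.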

omnibus surveys – surveys like the General Social Survey can cover a range of topics of interest to different social scientists.
Errors in Social Research

Problems can result from sampling, measurement, and design issues. Society is dynamic while a survey captures a single point in time, so results often differ because of societal changes.

Researchers must minimize two types of error: errors of observation (poor measurement of the cases that are surveyed) and errors of nonobservation (omission of cases that should be surveyed).
Nonobservation errors are caused by individuals not responding, a poor sampling frame, and sampling error in the random sample (the characteristics of the sample members do not match those of the population).
The survey instrument (questionnaire or interview schedule) should be well thought out, clear, and integrated as a whole, with questions that complement each other.

Survey questions must be asked of many people, not just one. The same instrument must be used, and the questions must be understood by all participants in the same way.
Basic principles of Question Writing

  1. Avoid confusing phrasing, such as double negatives and double-barreled questions.

  2. Use screening questions: filter questions permit you to move a person to another part of the survey based on their response. A person who answers no to a particular question is skipped ahead to another section; a person who answers yes goes on to a contingent question relevant to those who answered yes.

  3. Don’t use words that trigger bias. Loaded or leading words produce misleading answers. Another problem is response categories that do not reflect the full range of possible answers. When response categories form a continuum, the number of positive and negative categories must be balanced so that one end of the continuum doesn’t seem more attractive.

  4. Avoid making either agreement or disagreement disagreeable. Sometimes wording such as “to what extent do you support or oppose the issue” works better.

  5. Minimize fence-sitting. If people do not have strong feelings about an issue, they often choose a neutral category. Include a neutral category only when you want to know how many fence-sitters you have.

  6. Beware of how you use the “don’t know” or “no opinion” response category. Omit it if you suspect people will choose it because they are reluctant to express their real opinion; such people float to the “no opinion” category. The result is a forced-choice question. However, forcing people to answer about an issue they know nothing about can also produce unreliable data.

  7. Questionnaires lend themselves best to fixed-choice response answers. These provide one and only one possible response for everyone who is asked the question. The response choices must be exhaustive (which means adding an ‘other’ category so everyone has a place to answer) and mutually exclusive. Ranges of ages, incomes, years of schooling, etc. should not overlap and should not leave out any values within the range. The exception is ‘check all that apply’ questions, but these should be kept to a minimum.
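The exhaustive-and-mutually-exclusive requirement for numeric ranges can be checked mechanically. The helper below is a hypothetical sketch (not part of any survey package), assuming categories are inclusive integer brackets such as age ranges:

```python
def check_ranges(ranges, low, high):
    """Return True if the numeric response categories are mutually
    exclusive (no overlaps) and exhaustive (no gaps) over [low, high].
    `ranges` is a list of (lo, hi) tuples, inclusive on both ends."""
    ordered = sorted(ranges)
    if ordered[0][0] != low or ordered[-1][1] != high:
        return False  # does not cover the full span
    for (a_lo, a_hi), (b_lo, b_hi) in zip(ordered, ordered[1:]):
        # Each bracket must start exactly one unit after the last ends:
        # b_lo <= a_hi means overlap, b_lo > a_hi + 1 means a gap.
        if b_lo != a_hi + 1:
            return False
    return True

# Overlapping age brackets: 25 appears in two categories.
print(check_ranges([(18, 25), (25, 40), (41, 65)], 18, 65))  # False
# Clean brackets: every age 18-65 falls in exactly one category.
print(check_ranges([(18, 24), (25, 40), (41, 65)], 18, 65))  # True
```

Running the same check before printing a questionnaire is a cheap way to catch the overlapping-range mistake described above.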

  8. Sometimes it is necessary to use more than one question to jog people’s memories about past events. This technique is used in cognitive interviewing.

  9. Pay attention to the order of questions, since questions that follow or precede other questions can influence the answers given. Sort the questions into thematic categories, which become separate sections of the questionnaire. The first question should reflect the primary purpose of the study and should be interesting and easy.

Response categories to questions can be fixed choice or open-ended.

Open-ended questions are necessary to get at the meaning subjects attach to their answers. The respondent provides the answer in his/her own words. These answers can be lengthy and disorganized.
In closed-ended or fixed-choice questions, the researcher provides all the answer categories and the respondents check or circle their choice.
Refine and Test Questions

All questionnaires and/or interview schedules must be pre-tested on a small sample (pilot study) that is like the larger population or sample you will use in the actual study. Focus groups are often used to assist in formulating content of questionnaires and questions.
Interpretive Questions permit us to better understand what the answers tell us. We use interpretive questions to figure out:

1. What do the respondents know?

2. What relevant experiences do the respondents have?

3. How consistent are the respondents’ attitudes, and do they express some larger perspective or ideology?

4. Are respondents’ actions consistent with their expressed attitudes?

5. How strongly are the attitudes held?
Maintain consistent focus. Eliminate irrelevant questions. Development of the survey should be guided by a clear conception of the research problem under investigation and the population to be sampled.
Make the questionnaire attractive, and identify the path through it with words, arrows, or graphics. A mailed questionnaire must come with a cover letter. An interview must begin with an introductory statement that draws respondents in, makes them interested, and makes them feel that you and the research are credible.
Organizing Surveys – 5 basic Survey Designs
There are five basic social science survey designs: mailed, group-administered, phone, in-person, and electronic. Each design differs from the others in several important features (see Exhibit 8.4, pg. 247 in the Schutt text to see how these features apply to the 5 designs):

  1. Manner of Administration – Is the questionnaire completed by the respondent or does the researcher ask the questions and record the answers?

  2. Questionnaire Structure- Is the questionnaire highly structured or unstructured?

  3. Setting-Are the questionnaires being answered in an individual or group setting?

  4. Cost – Each of the 5 survey designs has different cost and time expectations.

Mailed, Self-Administered Surveys – the questionnaire is mailed to respondents, who complete it on their own. Mailed questionnaires typically get low response rates, so researchers have to follow up with non-respondents.
Other techniques include: sending a brief letter before the questionnaire to alert people that it is coming and to its importance; a personalized cover letter with a self-addressed stamped return envelope; a token monetary reward; a reminder postcard after the initial mailing; replacement questionnaires with a new cover letter to non-respondents after 2-4 weeks; and, after 6-8 weeks, a different mode of delivery and survey design for non-respondents.
Group-Administered Surveys – the questionnaire is distributed and collected in a group setting, which yields higher response rates.

This requires a captive audience and is seldom feasible. However, some groups lend themselves to these surveys, such as students, employees, members of the armed services, and institutionalized populations.
People taking questionnaires in a group setting often feel coerced to participate and may be less likely to answer honestly. They may believe the researcher is not independent of the sponsoring organization, such as their school or workplace.

Telephone Surveys – interviewers question respondents over the phone and record their answers. Phone interviews are safe and efficient, large samples can be used, and turnaround is fast. However, the validity of phone surveys comes into question when they do not reach the proper sampling frame or do not get enough complete responses to make the results generalizable.
Most phone surveys use random digit dialing in the sampling process. The person doing the survey works from a phone interview schedule and uses CATI (computer-assisted telephone interviewing).
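The core idea of random digit dialing can be sketched as below. This is a hypothetical simplification: the area code and prefix are placeholders, and real RDD sampling is more involved (working banks of numbers, screening out business and nonworking numbers, etc.).

```python
import random

def random_digit_numbers(area_code, prefix, n, seed=None):
    """Generate n candidate phone numbers by randomizing the last four
    digits, so unlisted numbers have the same chance of selection as
    listed ones -- the core idea behind random digit dialing."""
    rng = random.Random(seed)
    return [f"{area_code}-{prefix}-{rng.randint(0, 9999):04d}"
            for _ in range(n)]

# Placeholder area code and prefix, purely for illustration.
print(random_digit_numbers("555", "867", 3, seed=42))
```

Because the last digits are drawn at random rather than from a directory, this approach reaches households a listed sampling frame would miss.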
The research may require multiple callbacks, but software is available to handle this. Phone interviews might run as long as 30 minutes, but 10-15 is more typical. The problem is that people are often negatively biased because of telemarketing and the impersonal nature of phone interviewing. Careful interviewer training is necessary.

In-Person Interviews – a face-to-face interaction, so longer and more complex questions, and more of them, can be used. The interviewer can monitor the conditions under which the interview takes place and can probe for meaning to be sure subjects understand the questions; this results in high response rates.
The interviewer can control the order in which the questions are read and answered, and the physical and social circumstances of the interview can be monitored. However, every respondent should have the same interview experience, yet it should seem like a personalized interaction. Cost can be high due to the training of interviewers, and it can be a time-consuming process.
The interviewers can use CAPI (Computer-assisted personal interviewing) to increase control of the interview. The interviewers carry a laptop that displays the interview questions, processes the answers the interviewer types in and checks responses to make sure they fall into the allowed ranges.
It is more difficult for respondents to give honest answers about sensitive personal matters in the presence of the interviewer so rapport is especially important.
Electronic (Web) Surveys – this method uses the Internet to conduct research. Surveys can be delivered via e-mail or via the Web. E-mail surveys are more limited in scope because they cannot be lengthy, while Web surveys require programming expertise from the researcher and staff. Web-based surveys can be long; can include many graphic and typographic elements, links, and pull-down menus; and can use pictures and audio segments. Answers are recorded directly into the researcher’s database, so data-entry errors are eliminated and reports can be generated quickly.
The percentage of electronic surveys is growing rapidly.
IVR – Interactive Voice Response systems allow a survey to be conducted with an automated telephone-based system. Respondents receive automated calls and answer questions by pressing numbers on their touch-tone phone or by speaking numbers that are interpreted by computerized voice recognition software. However, at this time it is a very impersonal approach.
Mixed-Mode Surveys – Researchers can combine different survey designs to maximize the quantity and quality of data.
Which survey designs should be used when? (See Exhibit 8.17, pg. 303 in the Schutt text.)
Ethical Issues in Survey Research - Survey research poses fewer ethical issues than experimental or field research designs. Only in group-administered surveys does the issue of participation being voluntary arise.
Confidentiality is the primary ethical concern. The answers to some questions might prove damaging to subjects if disclosed. To avoid doing harm to subjects, it is critical that the researcher preserve their confidentiality. Nobody but the research personnel should have access to information that could link respondents to their responses. ID numbers, not names, should be used to identify respondents.
Anonymity is also an issue. If no identifying information is recorded to link respondents to their responses, then follow-up attempts to encourage participation can’t be made. Phone interviews are best for providing anonymity.

Experimental Research

Experimental research is usually used to answer questions about the effect on some outcome of a treatment or intervention whose values can be manipulated by the researcher. It is the strongest design for testing hypotheses.
In a hypothesis, the independent variable is considered to be the causal variable and the dependent variable is considered to be the affected variable (outcome or result).
Essential components of true experimental design

A. At least two comparison groups of subjects (usually an experimental and a control group)

B. Variation in the independent variable occurs before assessment of change in the dependent variable

C. Random assignment to the two (or more) comparison groups
A combination of these components gives us more confidence in the validity of causal conclusions.
Confidence is increased with the addition of two other components:

D. Identification of the causal mechanism

E. Control over the context of an experiment
A true experiment must have at least one experimental group (subjects who receive some treatment or experimental manipulation) and at least one comparison group (subjects to whom the experimental group can be compared; they receive a different treatment or no treatment. If no treatment is given, this group is called a control group).
Additional component of a true experiment – Posttest

F. All true experiments have a posttest – measurement of the outcome in both groups after the experimental group has received the treatment.
A true experiment does not require a pretest, but one can be advantageous. Pretests provide a measure of how much the experimental and comparison groups change over time. They demonstrate that randomization was successful and that chance factors did not lead to an initial difference between the groups. A pretest also provides a more complete picture of the conditions under which the intervention did or did not have an effect.
Pretest-posttest control group design – two or more groups (at least one experimental and one control group) and pretests and posttests.
Random Assignment to groups (Randomization)
This is not the same as random sampling. Random assignment versus random sampling: both rely on selection by chance, but the purpose differs.

A. Random Assignment places pre-designated subjects into two or more groups on the basis of chance.

B. Random Sampling selects subjects out of a larger population on the basis of chance.
If the comparison group differed from the experimental group in any way besides not receiving the treatment (or receiving a different treatment), the researcher would not be able to determine for sure what the unique effects of the treatment were.
In a true experiment, the subjects must be randomly assigned to the comparison and experimental groups. This prevents systematic bias in the assignment to groups. The larger the groups, the less likely it is that even modest differences between them will occur on the basis of chance, and the more possible it becomes to draw conclusions about causal effects from relatively small differences in the outcome.
When random assignment is used, the odds of a difference between the comparison and experimental groups arising by chance can be calculated. The probability of a difference by chance becomes very small for experiments with at least 30 subjects per group.
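Random assignment, and the way chance differences shrink as group size grows, can be simulated. This sketch is hypothetical: subjects carry a made-up 0/1 pre-existing trait, and the group sizes are chosen only for illustration.

```python
import random

def random_assignment(subjects, seed=None):
    """Randomly split pre-designated subjects into two groups --
    assignment by chance, which is not the same as sampling by chance."""
    rng = random.Random(seed)
    shuffled = subjects[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# For each group size, repeatedly assign subjects with a random 0/1
# trait and measure the average chance difference between the groups.
random.seed(0)
for n_per_group in (10, 30, 100):
    diffs = []
    for _ in range(2_000):
        subjects = [random.randint(0, 1) for _ in range(2 * n_per_group)]
        exp, ctrl = random_assignment(subjects)
        diffs.append(abs(sum(exp) / n_per_group - sum(ctrl) / n_per_group))
    print(n_per_group, round(sum(diffs) / len(diffs), 3))
```

The printed average difference falls as the groups grow, which is the intuition behind the 30-subjects-per-group guideline above: with larger groups, a nontrivial difference in outcomes is unlikely to be mere chance.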

Matching – subjects are assigned to the different groups based on similarity on variables such as gender, age, year in school, or some other important characteristic.
In matching, you identify in advance all the important variables on which to match when assigning subjects to groups. Matching should be used in conjunction with randomization, not instead of it.
If matching is used instead of random assignment, the research becomes a quasi-experiment instead of a true experiment.
Limitations of True Experimental Designs
It is difficult to isolate the actual mechanisms by which treatments have their effects.
It is difficult to guarantee that the researcher has been able to maintain control over the conditions after subjects have been assigned to their groups. If conditions differ, the variation between the experimental and comparison groups will not be what was intended. This is less likely to be a problem in lab experiments and more likely in field experiments.
What are the criteria for identifying a cause in true experiments?

1. Association between the hypothesized independent and dependent variables

2. Time order of effects of one variable on the other (manipulation of the independent variable occurred prior to the effect on the dependent variable)

3. Nonspurious relationship between variables (extraneous influences can create misleading relationships)

4. Mechanism that creates the causal effect (is it ambiguous?)

5. Context in which change occurs (how good is the control?)
An experiment is causal research: an intervention causes a change in the dependent variable.

Conclusions can be invalid because of selection bias, endogenous change, effects of external events, cross-group contamination, or treatment misidentification.
Often, testing a hypothesis with a true experimental design is not feasible. It may be too costly or time-consuming, or the desired setting or subjects may not be available.
In Quasi-experimental designs the subjects in the experimental and comparison groups are not randomly assigned.
In Nonequivalent Control Group Designs the experimental and control groups are designated before the treatment occurs and are not created by random assignment.
In Before-and-After Designs there is a pretest and a posttest but no comparison group. The subjects exposed to the treatment serve at an earlier time (in the pretest) as their own controls.

  1. Multiple Group Before and After Designs – several before and after comparisons are made involving the same variables but different groups and then the groups are compared

  2. Repeated Measures panel designs – the same group is observed many times (30 or more) receiving many pretests and posttests. This allows the researcher to study the process by which an intervention or treatment has an impact over time.

Quasi-experiments only partially meet the criteria for identifying a cause, but association between the IV and DV and the context in which change occurs can still be met.
Nonexperiments are sometimes confused with experimental designs, but they are not experiments.
Ex Post Facto Control Group Designs – the groups are designated after the treatment has occurred.
One-shot case studies and many longitudinal designs can fall into this category.
Validity – true experiments are well suited to producing valid conclusions about causality but do not do as well with respect to generalizability.
1. Internal Validity – noncomparability of treatment groups is a problem

a. selection bias – characteristics of the groups differ (attrition may be a cause)

b. endogenous change – subjects develop or change during the experiment

c. history effects – something occurs during the treatment that influences results

d. contamination – one of the groups is aware of the others and is influenced by this knowledge.

e. treatment misidentification – some process the researcher is not aware of affects the treatment: expectations of the experimental staff, self-fulfilling prophecy

Double-blind procedures can offset this effect (staff doesn’t know who is getting which treatment).
Placebo Effect – patients improve on a placebo because they think they are getting the real treatment.
Hawthorne Effect – subjects perform better just because they are part of a special experiment. (This can also affect evaluation research: program clients know that the research findings may affect the chances for further funding.)
Generalizability – Not easily achievable in experiment because it is difficult to apply the findings to some clearly defined larger population.
Experiments take place in an artificial setting.
Subjects are usually recruited or selected for the study, not chosen through random sampling.
2. External Validity – caused by things that happen outside of the experiment. The more that assignment to treatments is randomized and all experimental conditions are controlled, the less likely it is that the research subjects and setting will be representative of the larger population.
Ethical Issues in Experimental Research
Deception – subjects are misled. Deception is a critical component of many social experiments; if it is used, debriefing may be a good idea.

Deception can mean harm to subjects and a lack of voluntary participation and informed consent.
Selective Distribution of Benefits – this also can cause harm to subjects. If the researcher really doesn’t know that one treatment is more effective than another, it is not as potentially harmful.
